Scenario Generation for Cooling, Heating, and Power Loads Using Generative Moment Matching Networks
Wenlong Liao, Yusen Wang, Yuelong Wang, Kody Powell, Qi Liu, Zhe Yang
Abstract—Scenario generation of cooling, heating, and power loads is of great significance for the economic operation and stability analysis of integrated energy systems. In this paper, a novel deep generative network is proposed to model cooling, heating, and power load curves based on generative moment matching networks (GMMN), where an auto-encoder transforms high-dimensional load curves into low-dimensional latent variables and the maximum mean discrepancy serves as the similarity metric between the generated samples and the real samples. After training the model, new scenarios are generated by feeding Gaussian noise to the generator of the GMMN. Unlike explicit density models, the proposed GMMN does not need to artificially assume the probability distribution of the load curves, which leads to stronger universality. The simulation results show that GMMN not only fits the probability distribution of multiple load curves well, but also accurately captures the shape (e.g., large peaks, fast ramps, and fluctuation), frequency-domain characteristics, and temporal-spatial correlations of cooling, heating, and power loads. Furthermore, the energy consumption of the generated samples closely resembles that of the real samples.
Index Terms—Scenario generation, generative moment matching networks, deep learning, integrated energy systems.

I. INTRODUCTION

Integrated energy systems coupled with cooling, heating, and electric power energies can improve energy efficiency and meet the needs of islands, and they have become increasingly popular in recent years [1]. To better coordinate and control flexible resources in integrated energy systems, such as heat pumps, electric vehicles, energy storage, and air conditioners, it is necessary to accurately model cooling, heating, and power loads [2]. One widely used method to model these loads in integrated energy systems is to generate a set of stochastic scenarios. By using a set of possible time series scenarios, system operators can make decisions which account for the
uncertainties of cooling, heating, and power loads, such as stochastic optimization and robust optimization [3]. Therefore, the scenario generation of cooling, heating, and power loads is of great significance for the operation and planning of integrated energy systems.

The main idea of stochastic scenario generation is to generate a set of new samples similar to the historical load curves, which are used to train the generative models. Depending on whether an explicit probability distribution of the load curves is needed, the existing methods for stochastic scenario generation can be divided into two categories: explicit density models and implicit density models [4]. Specifically, explicit density models need to artificially assume the probability distribution of the load curves and use historical samples to fit its key parameters. For example, a Gaussian mixture model is proposed to represent the distribution characteristics of power loads in [5], and then the Monte Carlo method is used to generate stochastic scenarios of the power loads. Similarly, Ref. [6] approximated the probability distribution of loads with a normal distribution and obtained new load curves through the Latin hypercube sampling method.

W. Liao is with the Department of Energy Technology, Aalborg University, Aalborg 9220, Denmark. Y. Wang is with the School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, SE-100 44, Sweden. Y. Wang is with the State Grid Tianjin Chengxi Electric Power Supply Branch, Tianjin 300100, China. K. Powell is with the Department of Chemical Engineering, University of Utah, UT 84112, USA. Q. Liu (Corresponding author, e-mail: [email protected]) is with the Key Laboratory of Smart Grid of Ministry of Education, Tianjin University, Tianjin 300072, China.
In order to take into account the spatial correlation between multiple nodes when generating stochastic scenarios, Copula theory is used to construct a joint distribution function of loads in [7], and the load curves of multiple nodes are obtained simultaneously by sampling. In general, the load curves generated by explicit density models are of poor quality, since they rely on an artificially assumed probability distribution of the load curves. Additionally, the probability distribution of load curves is unknown most of the time, and it is difficult to represent it accurately with mathematical formulas. The probability distributions of load curves at different times and in different regions also differ, which makes explicit density models not universal [8]. In contrast, implicit density models require neither explicit likelihood estimation nor an artificial assumption about the probability distribution of the load curves. After training, stochastic scenarios obeying the underlying probability distribution are obtained by inputting noise to the generator. In addition, implicit density models can be applied to stochastic scenario generation for various loads at different times and in different regions by adjusting their structure and parameters [9]. For stochastic scenario generation in integrated energy systems, existing implicit density models mainly include the hidden Markov model (HMM), the generative adversarial network (GAN), and the variational auto-encoder (VAE) [10]. Specifically, HMM is
often used in data generation tasks because of its simple structure and clear physical meaning [11]. However, due to the assumed independence of its output variables, HMM ignores context information, and it is difficult for it to capture the spatial-temporal characteristics of the load curves. Both VAE and GAN are powerful deep generative models and have been extensively and independently studied for stochastic scenario generation tasks in distribution networks [12]. Nevertheless, VAE can only approximate the lower bound of the log-likelihood of the real load curves, which degrades the quality of the new samples generated by VAE. The vanishing and exploding gradient problems of GAN during training persist in previous publications, and these problems limit the quality of generated scenarios [13], [14]. The generative moment matching network (GMMN) is a new deep generative network widely used in the field of computer vision [15]. Compared with other generative networks such as VAE and GAN, GMMN presents a more stable training process and higher-quality generated samples [16], since it directly uses the maximum mean discrepancy (MMD) as the similarity metric between the generated samples and the real samples. At present, GMMN has shown excellent performance in many fields, such as image de-noising, image generation, voice synthesis, and style transfer [17], [18]. The successful applications of GMMN to images and videos prove that it can learn complex objective laws of high-dimensional data through unsupervised training.
In theory, GMMN can not only use deep convolutional neural networks with strong learning ability to effectively extract latent representations from cooling, heating, and power loads, but also employ the MMD as the loss function to reduce the distance between generated samples and real samples, so as to greatly improve the quality of new stochastic scenarios. However, the existing structures and parameters of GMMN are designed for computer vision and are not suitable for the one-dimensional time series of loads. Therefore, how to design a GMMN structure with strong feature extraction ability and high-quality generated samples according to the characteristics of cooling, heating, and power loads needs further research. This paper aims to design a GMMN to improve the quality of stochastic scenario generation for cooling, heating, and power loads. The performance of the proposed method is tested on a real-world dataset. The key contributions of this paper include: 1) A novel data-driven, model-free, and scalable method is proposed for stochastic scenario generation of cooling, heating, and electric power loads. By employing a deep convolutional neural network and the MMD, it generates scenarios that accurately capture the hallmark characteristics (e.g., large peaks, fast ramps, and fluctuation), frequency-domain characteristics, and temporal correlation of cooling, heating, and power loads. To our knowledge, this is the first work applying GMMN to stochastic scenario generation for integrated energy systems. 2) Unlike explicit density models, the proposed GMMN does not need to artificially assume the probability distribution of the load curves, which leads to stronger universality. By adjusting the structure and parameters of the network, GMMN can simultaneously generate stochastic scenarios of cooling, heating, and power loads accounting for spatial correlations.
After training, Gaussian noise is fed to GMMN to generate any number of stochastic scenarios, which provides sufficient data support for uncertain optimization and decision-making of integrated energy systems. 3) Compared with GAN, where the loss function fluctuates sharply and is difficult to converge, GMMN converges more quickly, and the entire training process is relatively stable. Besides, GMMN fits the probability distribution characteristics of cooling, heating, and power loads better than other popular generative models such as the Copula method, VAE, and GAN. The rest of this paper is organized as follows. Section II explains the structure and parameters of GMMN. Section III introduces the process of stochastic scenario generation based on GMMN. Section IV performs the simulations. Section V summarizes the conclusions.

II. GENERATIVE MOMENT MATCHING NETWORKS
As shown in Fig. 1, GMMN consists of an auto-encoder and a generator. Firstly, the real samples are used to train the auto-encoder composed of an encoder and a decoder. The mean square error (MSE) between the real samples and the reconstructed samples is regarded as the loss function, which is utilized to update the weights of the auto-encoder. Secondly, Gaussian noise is fed to the generator to obtain new samples. To update the weights of the generator, the MMD between the new samples and the real samples is calculated by the trained encoder from the auto-encoder. After training, the stochastic scenarios of cooling, heating, and power loads can be obtained by feeding Gaussian noise to the generator.
Fig. 1. The framework of GMMN for stochastic scenario generation.

A. Generator of GMMN
The main idea of the generator is to sample a simple prior distribution Z ~ P_z(z) (e.g., a Gaussian distribution) to obtain a noise vector Z. Then, a convolutional neural network composed of transposed convolutional layers is selected to represent the complex nonlinear relationship between the noise vector Z and the real load curves, due to its powerful feature extraction ability:
X_g = f_g(W_g \ast Z + B_g)   (1)

where X_g denotes the new load curves generated by the generator; f_g denotes the activation function of the generator; W_g and B_g denote the weight matrix and bias vector of the generator, respectively; and \ast denotes the transposed convolutional operation.

Unlike GAN, whose training process involves a difficult min-max optimization problem, training GMMN is comparatively simple, since it minimizes a straightforward loss function. Specifically, MMD is a popular frequentist estimator for comparing the similarity of two datasets and testing whether their samples come from the same distribution [20]. Therefore, MMD is used to measure the difference between the generated load curves X_g and the real load curves X_r. Its mathematical formula is:

MMD^2 = \left\| \frac{1}{N} \sum_{i=1}^{N} \phi(x_{g,i}) - \frac{1}{M} \sum_{j=1}^{M} \phi(x_{r,j}) \right\|^2   (2)

where N denotes the number of generated load curves; M denotes the number of real load curves; and \phi denotes a transformation function, which determines which differences between the samples are matched. Each term in Eq. (2) only involves inner products between vectors, so the load curves can be transferred into a new space by the kernel trick. Its mathematical formula is:

MMD^2 = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{i'=1}^{N} k(x'_{g,i}, x'_{g,i'}) - \frac{2}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} k(x'_{g,i}, x'_{r,j}) + \frac{1}{M^2} \sum_{j=1}^{M} \sum_{j'=1}^{M} k(x'_{r,j}, x'_{r,j'})   (3)

where k(\cdot) denotes a kernel function; x'_{g,i} denotes the representation of the generated load curves in the new space; and x'_{r,j} denotes the representation of the real load curves in the new space. Furthermore, if \phi in Eq. (2) is an identity transformation, MMD is equivalent to the difference between the means of the generated load curves and the real load curves. If \phi is a quadratic transformation, MMD matches the second-order moments of the generated and real load curves. In the same way, if \phi includes terms of all orders, MMD covers all order moments of the probability distribution of the load curves [21], which is why this network structure is called the generative moment matching network.
To make the generated load curves and the real load curves have the same distribution, the transformation in GMMN should include terms of all orders. The Gaussian function can be expanded into an infinite series through Taylor expansion, which meets the requirement of Eq. (2) to match every moment. Therefore, the kernel in Eq. (3) is chosen as the Gaussian kernel:
k(x, x') = \exp\left( -\frac{1}{2\sigma} \| x - x' \|^2 \right)   (4)

where \sigma is the bandwidth parameter. With this loss function, the weights of the generator can be updated by the chain rule and gradient descent to complete the training process.

B. Auto-encoder of GMMN
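A minimal numpy sketch of the kernel MMD in Eqs. (2)-(4) (not the paper's TensorFlow implementation; sample sizes and the bandwidth are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel of Eq. (4): k(x, x') = exp(-||x - x'||^2 / (2*sigma))."""
    diff = x[:, None, :] - y[None, :, :]            # all pairwise differences
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma))

def mmd2(xg, xr, sigma=1.0):
    """Squared MMD of Eq. (3) between generated samples xg (N, d) and real samples xr (M, d)."""
    n, m = len(xg), len(xr)
    k_gg = gaussian_kernel(xg, xg, sigma).sum() / n ** 2
    k_rr = gaussian_kernel(xr, xr, sigma).sum() / m ** 2
    k_gr = gaussian_kernel(xg, xr, sigma).sum() * 2.0 / (n * m)
    return k_gg + k_rr - k_gr

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 3)), rng.normal(0, 1, (200, 3)))
shifted = mmd2(rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (200, 3)))
print(same < shifted)  # True: samples from the same distribution give a smaller MMD
```

Minimizing this quantity over the generator's weights is what drives the generated distribution toward the real one.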
Because stochastic scenarios of cooling, heating, and power loads must be generated at the same time, the dimension of the load curves generated by GMMN is very high, which is not conducive to calculating the loss function. Fortunately, there are strong temporal-spatial correlations among cooling, heating, and power loads, i.e., high-dimensional load curves can be represented by a low-dimensional manifold. This is beneficial for statistical estimators such as the MMD, since the volume of data required for a reliable estimate grows with the dimension of the data [22]. Therefore, this paper uses an auto-encoder to map the high-dimensional load curves into low-dimensional latent variables, which are used to calculate the MMD loss function.

The auto-encoder is an unsupervised neural network composed of an encoder and a decoder, whose loss function drives the output data to reproduce the input data [23]. Specifically, the encoder maps high-dimensional load curves to low-dimensional latent variables, which reflect the main characteristics of the original input data. Then, the decoder reconstructs the latent variables into new load curves similar to the input data. Take an encoder and decoder built from dense layers as an example to illustrate the data flow of the auto-encoder. In the encoding process, the input of the encoder is the real load curves X_r, which are passed through multiple dense layers to obtain the low-dimensional latent variables H. The mathematical formula of the encoder is:

H = f_e(X_r W_e + B_e)   (5)

where f_e denotes the activation function of the encoder; W_e and B_e denote the weight matrix and bias vector of the encoder, respectively. In the decoding process, the input of the decoder is the low-dimensional latent variables H, which are passed through multiple dense layers to obtain the reconstructed load curves X_d.
The mathematical formula of the decoder is:

X_d = f_d(H W_d + B_d)   (6)

where f_d denotes the activation function of the decoder; W_d and B_d denote the weight matrix and bias vector of the decoder, respectively. The goal of the auto-encoder is to make the real load curves and the reconstructed load curves as similar as possible, so the loss function L_AE can be defined as the MSE:

L_{AE} = \frac{1}{m} \sum_{i=1}^{m} (x_{r,i} - x_{d,i})^2   (7)

where m is the number of load curves; x_{r,i} is the i-th element of the real load curves; and x_{d,i} is the i-th element of the reconstructed load curves.

III. STOCHASTIC SCENARIO GENERATION VIA
GMMN

When GMMN is used for stochastic scenario generation of loads, the generator of GMMN is affected by physical factors such as the temporal-spatial correlations between cooling, heating, and power loads. Although stochastic scenario generation is different from data generation in the field of computer vision, the process is similar. Specifically, the core process of stochastic scenario generation for cooling, heating, and power loads is shown in Fig. 2. The steps are as follows:

1) Process data. The dataset is divided into a training set and a test set: 80% of the load curves are randomly selected to train the GMMN, and the remaining samples are used to evaluate its performance. Before being fed into the model, the load curves need to be normalized; otherwise, the loss functions of the auto-encoder and the generator may not converge. Therefore, the minimum-maximum normalization method is selected to transform the load curves into the range of 0 to 1:
X' = \frac{X - X_{min}}{X_{max} - X_{min}}   (8)

where X is the load before normalization; X' is the load after normalization; X_{max} is the maximum value of the load; and X_{min} is the minimum value of the load.

2) Train auto-encoder. After initializing the network structure, the normalized training samples are fed into the encoder. The decoder takes the low-dimensional latent variables output by the encoder as input, and then outputs the reconstructed load curves. Next, the real load curves and the reconstructed load curves are used to calculate the MSE and update the weights of the encoder and decoder. When the set number of iterations is reached, the encoder is used in the training process of the generator.

3) Train GMMN. After initializing the network structure, Gaussian noise is fed into the generator to obtain new load curves similar to the real load curves. The trained encoder transforms the real load curves and the new load curves into latent variables for calculating the MMD loss function, which is used to update the weights of the generator. When the set number of iterations is reached, the generator is used to generate stochastic scenarios for cooling, heating, and power loads.

4) Evaluate performance. After Gaussian noise is input into the trained generator, the output is de-normalized to obtain new load curves. Finally, the test set is used to measure whether the new load curves have temporal-spatial correlations and distribution characteristics similar to those of the real load curves.

Fig. 2. Process of stochastic scenario generation.
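Steps 1) and 2) above can be sketched in numpy, with Eq. (8) for normalization and a single-layer forward pass of Eqs. (5)-(7); the dimensions and random weights are hypothetical stand-ins for the trained multi-layer model:

```python
import numpy as np

def normalize(x):
    """Eq. (8): min-max normalization into the range [0, 1]."""
    return (x - x.min()) / (x.max() - x.min()), x.min(), x.max()

def denormalize(x_norm, x_min, x_max):
    """Inverse transform applied to the generator output in step 4)."""
    return x_norm * (x_max - x_min) + x_min

rng = np.random.default_rng(1)
raw = rng.uniform(200, 600, (5, 72))            # 5 load curves: 24 h x 3 loads, flattened
X_r, lo, hi = normalize(raw)

# One-layer encoder/decoder forward pass with untrained random weights
W_e, B_e = rng.normal(0, 0.1, (72, 16)), np.zeros(16)
H = np.maximum(X_r @ W_e + B_e, 0.0)            # Eq. (5), ReLU activation
W_d, B_d = rng.normal(0, 0.1, (16, 72)), np.zeros(72)
X_d = np.tanh(H @ W_d + B_d)                    # Eq. (6), Tanh activation
L_AE = np.mean((X_r - X_d) ** 2)                # Eq. (7), MSE loss

print(X_r.min(), X_r.max())                     # 0.0 1.0
print(H.shape, np.allclose(denormalize(X_r, lo, hi), raw))  # (5, 16) True
```

In the actual method, gradient descent on L_AE adjusts the weights until the 16 latent variables summarize each 72-point curve.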
IV. CASE STUDY

A. Dataset and Simulation Tools
In order to fully verify the effectiveness of the proposed algorithm, a real dataset from the University of Texas at Austin is used for simulation and analysis [24]. The Hal C. Weaver power plant and its associated facilities provide all the cooling, heating, and power energy for the campus, which serves 70,000 students, staff, and faculty across 160 buildings. The dataset records the hourly cooling, heating, and power demands from July 17, 2011 to September 4, 2012. The programs of GMMN for stochastic scenario generation of cooling, heating, and power loads are implemented in Spyder 3.2.8 with the Tensorflow 1.12.0 and Keras 2.2.4 libraries. The simulations run on a laptop with 8 GB of memory and an Intel(R) Core(TM) i5-10210U processor @ 1.60 GHz (up to 2.11 GHz).

B. Structure and parameters of GMMN
In order to make GMMN perform well for stochastic scenario generation of cooling, heating, and power loads, the control variable method in [25] is utilized to search for suitable structures and parameters of GMMN, as shown in Fig. 3.
Fig. 3. Structure and parameters of GMMN. (a) Auto-encoder. (b) Generator.
For the auto-encoder, its encoder consists of three dense layers with the rectified linear unit (ReLU) activation function.
Structure of the auto-encoder (output size of each layer): input time series of loads (3×24); reshape (72×1); Dense, 64 units, ReLU (64×1); Dense, 32 units, ReLU (32×1); Dense, 16 units, ReLU (16×1); Dense, 16 units, ReLU (16×1); Dense, 32 units, ReLU (32×1); Dense, 64 units, ReLU (64×1); Dense, 72 units, Tanh (72×1); reshape (24×3).

Structure of the generator (output size of each layer): input Gaussian noise (32×1); Dense, 128 units, ReLU (128×1); reshape (2×2×32); Conv2DTran, filters=32, kernel=2, strides=2, BatchNorm, ReLU (4×4×32); Conv2DTran, filters=16, kernel=2, strides=2, BatchNorm, ReLU (8×8×16); Conv2DTran, filters=1, kernel=1, strides=2, BatchNorm, Tanh (9×9×1); reshape (81×1); discard redundant data (72×1); reshape (24×3).
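The early feature-map sizes in Fig. 3(b) follow the standard transposed-convolution size rule out = (in − 1)·stride + kernel, assuming 'valid' padding (an assumption, since the padding mode is not stated in the figure):

```python
def conv_transpose_out(size_in, kernel, stride):
    """Output spatial size of a transposed convolution with 'valid' padding."""
    return (size_in - 1) * stride + kernel

# Generator feature-map growth implied by Fig. 3(b):
# 2x2 input, then two Conv2DTran layers with kernel=2, strides=2
s = 2
for _ in range(2):
    s = conv_transpose_out(s, kernel=2, stride=2)
print(s)  # 8 -> the 8x8x16 feature map before the final layer
```

After the final layer, the 81 output values are flattened and the redundant entries discarded to recover the 72-point (24 h × 3 loads) curve.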
Fig. 4. Training evolution of GMMN.
The MMD loss function of GMMN decreases rapidly as the number of iterations increases. When the number of iterations is greater than 100, the MMD loss function of the generator tends to a constant value, indicating that GMMN has converged. Unlike GAN where the loss function fluctuates sharply and is difficult to converge, GMMN converges very quickly, and the entire training process is relatively stable. C. Simulation results and analysis
To check whether the new samples generated by GMMN and the real samples have similar patterns, 2000 Gaussian noise samples are fed to the generator of GMMN to obtain the corresponding cooling, heating, and power curves. Then, a subset of real samples is randomly selected from the test set, and the Euclidean distances between the new samples and the selected real samples are calculated. Finally, the selected real samples and their closest new samples are visualized, as shown in Fig. 5. As shown in the first row of Fig. 5, the shapes of the generated cooling, heating, and power curves are very similar to those of the real samples, so that it is hard to distinguish them with the naked eye. The GMMN accurately captures the hallmark characteristics of cooling, heating, and power load curves, such as large peaks, fast ramps, and fluctuation. Furthermore, the real samples of the test set did not participate in the training process of GMMN, and the cooling, heating, and power load curves generated by GMMN are consistent with the shapes of the real samples in the test set, which shows that GMMN has strong generalization ability.

In addition to the shapes of the load curves, some statistical properties of the real samples and the new samples should be verified. The auto-correlation function represents the temporal correlation of a time series, and capturing the correct temporal behavior is of great importance to the operation of integrated energy systems. Therefore, the auto-correlation function is employed to compare the temporal correlation between real samples and new samples. Its mathematical formula is:

R(\tau) = \frac{E[(x_t - \mu)(x_{t+\tau} - \mu)]}{\sigma^2}   (9)

where x_t is the value of the load curve at time t; \mu is the mean of the load curve; \sigma^2 is the variance of the load curve; \tau is the lag time; and E is the expectation operator. The second row of Fig. 5 shows the auto-correlation functions of the cooling, heating, and power load curves.
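A sample estimate of Eq. (9) can be sketched in numpy; the sinusoidal series below is a hypothetical stand-in for a daily load curve:

```python
import numpy as np

def autocorr(x, tau):
    """Sample version of Eq. (9): R(tau) = E[(x_t - mu)(x_{t+tau} - mu)] / sigma^2."""
    mu, var = x.mean(), x.var()
    return np.mean((x[:len(x) - tau] - mu) * (x[tau:] - mu)) / var

t = np.arange(240)
load = 100 + 20 * np.sin(2 * np.pi * t / 24)   # 10 days of a 24-hour cycle
print(round(autocorr(load, 1), 3))             # close to 1: slowly varying curve
print(round(autocorr(load, 12), 3))            # near -1: half a period out of phase
```

Strongly periodic loads therefore show high short-lag auto-correlation, which is the behavior compared in the second row of Fig. 5.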
It is found that the trends of the auto-correlation functions of the generated samples closely resemble those of the real samples, which indicates that GMMN accurately captures the temporal correlation of the real cooling, heating, and power load curves.

The fluctuations and frequency-domain characteristics of cooling, heating, and power loads have a great influence on the operation of integrated energy systems. The power spectral density (PSD) represents the energy of the frequency components of the load curves, and it is often utilized to measure frequency-domain characteristics [26]. Its mathematical formula is:

P_{sd} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 dt   (10)

where P_{sd} is the power spectral density and T is the period. In this paper, the periodogram function of MATLAB 2018a is employed to obtain the PSDs of the load curves. The third row of Fig. 5 shows the PSDs of the cooling, heating, and power load curves. The trends of the PSDs of the generated samples and the real samples are basically the same, which shows that the samples generated by GMMN reflect well the fluctuation components of the real load curves at different frequencies.

Load duration curves present a load in descending order, with the maximum value plotted on the left and the minimum value on the right. The area under a load duration curve denotes the energy demand per day. Its mathematical formula is:

t_j = \sum_{i=1}^{m} q_{i,j}, \quad q_{i,j} = \begin{cases} 1, & P_i \geq P_j \\ 0, & P_i < P_j \end{cases}, \quad i = 1, \dots, m; \; j = 1, \dots, n   (11)

where m is the size of the load curve; n is the number of intervals for the load curves; and t_j is the time during which the loads are greater than the j-th element P_j of the load curve.
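A discrete counterpart of Eq. (10) (a simple FFT periodogram, standing in for MATLAB's periodogram function) and the sorting view of Eq. (11) can be sketched as follows, on a hypothetical day of hourly loads:

```python
import numpy as np

def periodogram(x):
    """Discrete PSD estimate for Eq. (10): |FFT|^2 averaged over the record length."""
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

def load_duration_curve(x):
    """Eq. (11): loads sorted in descending order; the area under the curve is the energy."""
    return np.sort(x)[::-1]

t = np.arange(24)
load = 100 + 30 * np.sin(2 * np.pi * t / 24)   # one hypothetical day of hourly loads
psd = periodogram(load)
print(np.argmax(psd[1:]) + 1)                  # 1: the daily cycle dominates the spectrum
ldc = load_duration_curve(load)
print(np.isclose(ldc.sum(), load.sum()))       # True: sorting preserves total energy
```

The equal sums illustrate why matching load duration curves implies matching daily energy consumption.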
From the fourth row of Fig. 5, it can be found that the load duration curves of the real samples and the generated samples are extremely similar, and the areas enclosed with the axes are basically the same, which shows that the daily energy consumption of the cooling, heating, and power load curves generated by GMMN is consistent with the actual scenarios.

Fig. 5. Visualization of real samples from the test set and new samples generated by GMMN. (a) Sample 1. (b) Sample 2.
As one of the best indicators of the association between continuous variables, the Pearson correlation coefficient is often used to evaluate the linear relationship of load curves at various look-ahead times [4]. Its mathematical formula is:

p_{xy} = \frac{\sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{m} (x_i - \bar{x})^2 \sum_{i=1}^{m} (y_i - \bar{y})^2}}   (12)

where p_{xy} is the Pearson correlation coefficient between x and y; \bar{x} is the mean of x; and \bar{y} is the mean of y. To validate whether the new samples generated by GMMN have a temporal correlation similar to that of the real samples, Fig. 6, Fig. 7, and Fig. 8 visualize the Pearson correlation matrices of real samples and generated samples for cooling, heating, and power load curves.

Fig. 6. The Pearson correlation matrix of cooling loads. (a) Real samples. (b) New samples generated by GMMN.
Fig. 7. The Pearson correlation matrix of heating loads. (a) Real samples. (b) New samples generated by GMMN.
Fig. 8. The Pearson correlation matrix of power loads. (a) Real samples. (b) New samples generated by GMMN.
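Eq. (12), used to build the matrices in Figs. 6-8, can be sketched in numpy and checked against np.corrcoef; the two series below are hypothetical correlated loads:

```python
import numpy as np

def pearson(x, y):
    """Eq. (12): linear correlation between two load series."""
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

rng = np.random.default_rng(2)
cool = rng.random(100)
heat = 0.8 * cool + 0.2 * rng.random(100)      # hypothetical loads sharing a common driver
print(np.isclose(pearson(cool, heat), np.corrcoef(cool, heat)[0, 1]))  # True
```

Filling a matrix with this coefficient for every pair of look-ahead times yields the correlation matrices compared in the figures.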
The following conclusions can be drawn from Fig. 6 to Fig. 8: 1) Although the Pearson correlation coefficients between current loads and previous loads decrease as the lag time increases, they are always greater than 0.8, which indicates that there is a strong temporal correlation in cooling, heating, and power load curves. Specifically, the temporal correlation in heating load curves is the strongest, since the Pearson correlation coefficients between current heating loads and previous heating loads are always greater than 0.9.
Fig. 9. The Pearson correlation matrix among multiple loads. (a) Real samples. (b) New samples generated by GMMN.
Obviously, the Pearson correlation matrix of the new samples differs only slightly from that of the real samples, and the maximum error is 0.029, which indicates that GMMN can well capture the spatial correlation among multiple loads. Besides verifying the above properties, Fig. 10 shows the probability distribution functions (PDFs) of historical samples and of new samples generated by GMMN and by popular baselines such as the Copula method [7], VAE [12], and GAN [13].
Fig. 10. PDFs of multiple loads. (a) Cooling loads. (b) Heating loads. (c) Power loads.
It is found that the difference between the PDFs of the real samples and the new samples generated by GMMN is very small, and the three PDFs of GMMN are closer to those of the real samples than those of the existing methods, which indicates the capability of GMMN to generate new samples for cooling, heating, and power loads with the correct marginal distributions.

V. CONCLUSION AND FUTURE WORKS
To improve the quality of stochastic scenario generation for cooling, heating, and power loads, this paper proposes a novel data-driven method based on GMMN. Through simulation analysis on a real dataset, the following conclusions are obtained: 1) Unlike GAN, where the loss function fluctuates sharply and is difficult to converge, GMMN converges very quickly, and the entire training process is relatively stable. Besides, GMMN fits the probability distribution characteristics of cooling, heating, and power loads better than popular methods such as the Copula method, VAE, and GAN. 2) Simulation results show that GMMN accurately captures the hallmark characteristics (e.g., large peaks, fast ramps, and fluctuation), frequency-domain characteristics, and temporal correlation of cooling, heating, and power load curves. In addition, the energy consumption of the generated samples closely resembles that of the real samples. 3) GMMN takes into account the spatial correlation among multiple loads when generating new stochastic scenarios, which is in line with actual scenes. For future work, GMMN can be extended to a conditional GMMN, where Gaussian noise and labels are fed into the generator to obtain stochastic scenarios with specified properties such as heavy loads and light loads.

REFERENCES

[1]
Y. J. Qin, L. L. Wu, J. H. Zheng, M. S. Li, Z. X. Jing, Q. H. Wu, X. X. Zhou, and F. Wei, “Optimal operation of integrated energy systems subject to coupled demand constraints of electricity and natural gas,”
CSEE Journal of Power and Energy Systems , vol. 6, no. 2, pp. 444-457, Jun. 2020. [2]
H. Ahn, J. D. Freihaut, and D. Rim, “Economic feasibility of combined cooling, heating, and power (CCHP) systems considering electricity standby tariffs,”
Energy , vol. 169, pp. 420-432, Feb. 2019. [3]
Q. Z. Zhao, W. L. Liao, S. X. Wang, and J. R. Pillai, “Robust voltage control considering uncertainties of renewable energies and loads via improved generative adversarial network,”
Journal of Modern Power Systems and Clean Energy , vol. 8, no. 6, pp. 1104-1114, Nov. 2020. [4]
L. J. Ge, W. L. Liao, S. X. Wang, B. B. Jensen, and J. R. Pillai, “Modeling daily load profiles of distribution network for scenario generation using flow-based generative network,”
IEEE Access , vol. 8, pp. 77587-77597, Apr. 2020. [5]
Z. W. Wang, C. Shen, F. Liu, and F. Gao, “Analytical expressions for joint distributions in probabilistic load flow,”
IEEE Transactions on Power Systems , vol. 32, no. 3, pp. 2473-2474, May. 2017. [6]
J. Gao, W. Du, H. F. Wang, and L. Y. Xiao, "Probabilistic load flow using latin hypercube sampling with dependence for distribution networks," in ,
[7] M. Z. Zhang, Y. C. Huang, M. H. Yuan, M. Wang, and X. Y. Sun, "Correlation analysis between load and output of renewable energy generation based on time-varying Copula theory," in ,
[8] W. Hu, H. X. Zhang, Y. Dong, Y. T. Wang, L. Dong, and M. Xiao, "Short-term optimal operation of hydro-wind-solar hybrid system with improved generative adversarial networks,"
Applied Energy , vol. 250, pp. 389-403, Sept. 2019. [9]
J. K. Liang, and W. Y. Tang, “Sequence generative adversarial networks for wind power scenario generation,”
IEEE Journal on Selected Areas in Communications , vol. 38, no. 1, pp. 110-118, Jan. 2020. [10]
C. G. Turhan and H. S. Bilge, "Recent trends in deep generative models: a review," in ,
[11] E. Messina and D. Toscani, "Hidden Markov models for scenario generation,"
IMA Journal of Management Mathematics , vol. 19, no. 4, pp. 379-401, Oct. 2008. [12]
Z. X. Pan, J.M. Wang, W. L. Liao, H. W. Chen, D. Yuan, W. P. Zhu, X. Fang, and Z. Zhu, “Data-driven EV load profiles generation using a variational auto-encoder,”
Energies , vol. 12, no. 5, pp. 1-15, Mar. 2019. [13]
Y. Z. Chen, Y. S. Wang, D. Kirschen, and B. S. Zhang, “Model-free renewable scenario generation using generative adversarial networks,”
IEEE Transactions on Power Systems , vol. 33, no. 3, pp. 3265-3275, May. 2018. [14]
C. Ren, and Y. Xu, “A fully data-driven method based on generative adversarial networks for power system dynamic security assessment with missing data,”
IEEE Transactions on Power Systems , vol. 34, no. 6, pp. 5044-5052, Nov. 2019. [15]
H. C. Gao and H. Huang, "Joint generative moment-matching network for learning structural latent code," in ,
[16] Y. J. Li, K. Swersky, and R. Zemel, "Generative moment matching networks," in ,
[17] H. Tamaru, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, "Generative moment matching network-based neural double-tracking for synthesized and natural singing voices,"
IEICE Transactions on Information and Systems , vol. 103, no. 3, pp. 639-647, Mar. 2020. [18]
H. Tamaru, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, "Joint generative moment-matching network for learning structural latent code," in ,
[19] W. Qiu, Q. Tang, J. Liu, and W. X. Yao, "An automatic identification framework for complex power quality disturbances based on multifusion convolutional neural network,"
IEEE Transactions on Industrial Informatics , vol. 16, no. 5, pp. 3233-3241, May. 2020. [20]
Y. M. Chen, S. J. Song, S. Li, and C. Wu, “A graph embedding framework for maximum mean discrepancy-based domain adaptation algorithms,”
IEEE Transactions on Image Processing , vol. 29, pp. 199-213, Jul. 2019. [21]
H. L. Yan, Z. T. Li, Q. L. Wang, P. H. Li, Y. Xu, and W. M. Zuo, “Weighted and class-specific maximum mean discrepancy for unsupervised domain adaptation,”
IEEE Transactions on Multimedia , vol. 22, no. 9, pp. 2420-2433, Sept. 2020. [22]
A. Ramdas, S. J. Reddi, B. Póczos, A. Singh, and L. Wasserman, “On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions,” in
The Twenty-Ninth AAAI Conference on Artificial Intelligence.
[23] X. Cheng, Y. F. Zhang, L. Zhou, and Y. H. Zheng, "Visual tracking via auto-encoder pair correlation filter,"
IEEE Transactions on Industrial Electronics , vol. 67, no. 4, pp. 3288-3297, Apr. 2020. [24]
K. M. Powell, A. Sriprasad, W. J. Cole, and T. F. Edgar, “Heating, cooling, and electrical load forecasting for a large-scale district energy system,”
Energy , vol. 74, pp. 877-885, Sept. 2014. [25]
W. L. Liao, D. C. Yang, Y. S. Wang, and X. Ren. (2020, Dec.) “Fault diagnosis of power transformers using graph convolutional network,”
CSEE Journal of Power and Energy Systems . [Online]. Available: https://ieeexplore.ieee.org/document/9299500 [26]
H. I. Choi, G. J. Noh, and H. C. Shin, “Measuring the depth of anesthesia using ordinal power spectral density of electroencephalogram,”
IEEE Access, vol. 8, pp. 50431-50438, Mar. 2020.