Classifying the Equation of State from Rotating Core Collapse Gravitational Waves with Deep Learning
Matthew C. Edwards
Department of Statistics, University of Auckland, Auckland, New Zealand
In this paper, we seek to answer the question "given an image of a rotating core collapse gravitational wave signal, can we determine its nuclear equation of state?". To answer this question, we employ a deep convolutional neural network to learn visual patterns embedded within rotating core collapse gravitational wave (GW) signals in order to predict the nuclear equation of state (EOS). Using the 1824 rotating core collapse GW simulations by Richers et al [29], which cover 18 different nuclear EOS, we consider this to be a classic multi-class image classification problem. We attain up to 71% correct classifications in the test set, and if we consider the "top 5" most probable labels, this increases to up to 97%, demonstrating that there is a moderate and measurable dependence of the rotating core collapse GW signal on the nuclear EOS.
I. INTRODUCTION
To date, gravitational waves (GWs) from stellar core collapse have not been directly observed by the network of terrestrial detectors, Advanced LIGO and Advanced Virgo [2]. However, they are a promising source [17], and we could learn a great deal about the dynamics of the core collapse and the shock revival mechanism that leads to explosion [22]. It may even be possible to constrain the nuclear equation of state (EOS).

The death of massive stars (of at least 10 M☉ at zero-age main sequence, ZAMS) begins when the star exhausts its thermonuclear fuel through fusion, leaving an iron core that is supported by the pressure of relativistic degenerate electrons. Once the core reaches the Chandrasekhar limit, electron degeneracy pressure cannot support it, and collapse ensues. The core compresses, increasing in density, and squeezing protons and electrons together to create neutrons and neutrinos via electron capture. The strong nuclear force halts the collapse by a stiffening of the nuclear EOS, which causes the inner core to rebound (or bounce), creating a shock wave that blasts into the in-falling outer core. The shock wave on its own is not strong enough to generate a supernova explosion, leading to a number of competing theories of shock revival, such as the neutrino-driven mechanism and the magnetorotational mechanism [3, 9, 21, 22].

Inferring the supernova explosion (or shock-revival) mechanism has been the primary focus of the parameter estimation literature for core collapse GWs (see e.g., Chan et al [6], Logue et al [24], Powell et al [26, 27]), and this has naturally been treated as a classification problem because the competing mechanisms (namely, the neutrino mechanism and the magnetorotational mechanism) have distinct waveform morphologies. Other efforts have focused on estimating various parameters that have been noted to significantly influence a rotating core collapse GW waveform, such as the ratio of rotational kinetic energy to gravitational potential energy of the inner core at bounce, and the precollapse differential rotation profile [3, 11].

The nuclear EOS, however, is a poorly understood part of physics, though theoretical, experimental, and observational constraints are converging, leading to greater insights about dense matter [23]. It is hoped that GW detectors such as Advanced LIGO [1], Advanced Virgo [4], and KAGRA [34] can help constrain the nuclear EOS [29]. There have been very limited attempts at conducting parameter estimation on the nuclear EOS from rotating core collapse GW signals. Röver et al [30] used a Bayesian principal component regression model to reconstruct a rotating core collapse GW signal and matched this to the closest waveform in the Dimmelmeier et al [9] catalogue using a χ-distance. The EOS of the injected signal was classified as the EOS of the best matching catalogue signal. The lack of success in making statistical inferences about the nuclear EOS may perhaps be partly due to the notion that it has very little influence on the GW signal [9, 29]. However, in this paper, we demonstrate that it is possible to correctly identify the nuclear EOS approximately two thirds of the time.

Richers et al [29] provide the most in-depth study of the EOS effect on the rotating core collapse and bounce GW signal, and find that the signal is largely independent of the EOS. However, the signal can show stronger dependence on the EOS in the post-bounce proto-neutron star (PNS) oscillations, in terms of the peak GW frequency.
They find that the primary effect of the EOS on the GW signal is through its effect on the mass of the inner core at bounce and the central density of the post-bounce oscillations. We use this waveform catalogue (publicly available through zenodo.org [28]), which contains 18 different nuclear EOS, re-frame the problem as an 18-class image classification problem, and use a deep learning algorithm called the convolutional neural network (CNN) to solve it [16].

Deep learning has already seen much success in the field of GW astronomy. CNNs in particular have been used for classification and identification problems, and much of the early literature focuses on the glitch classification problem. For example, Zevin et al [35] created the Gravity Spy project, which uses CNNs to classify glitches in Advanced LIGO data, with image labels outsourced to citizen scientists. George et al [15] improve on this by using deep transfer learning with pretrained networks to reach an accuracy of 98.8%. In terms of the GW signal identification problem, Gabbard et al [12] use CNNs to distinguish between binary black hole signals and noise, reproducing sensitivities achieved by matched filtering. George and Huerta [14] use a CNN method called Deep Filtering to identify binary black hole signals in noise. They also use this to conduct parameter estimation. Further, Dreissigacker et al [10] use CNNs to search for continuous waves from unknown spinning neutron stars.

Much effort has gone into computing low-latency Bayesian posteriors for binary black hole systems with deep learning, particularly through the use of variational autoencoders. Gabbard et al [13] train conditional variational autoencoders to generate Bayesian posteriors around six orders of magnitude faster than any other method. Green et al [19] use conditional variational autoencoders in conjunction with autoregressive normalizing flows and demonstrate results consistent with standard Markov chain Monte Carlo (MCMC) methods, but with near-instantaneous computation time. Green and Gair [18] then generalize this further to estimate posteriors for the signal parameters of GW150914. Chua and Vallisneri [8] use multilayer perceptrons to compute one- and two-dimensional marginalized Bayesian posteriors. Shen et al [32] use Bayesian neural networks to constrain parameters of binary black holes before and after merger, as well as inferring final spin and quasi-normal frequencies.

Deep learning has also recently begun populating the core collapse GW literature. Astone et al [5] trained CNNs on phenomenological g-mode waveform models to search for core collapse supernova GWs in multiple terrestrial detectors. They demonstrated that their CNN can enhance detection efficiency and outperform Coherent WaveBurst (cWB) at various signal-to-noise ratios. Iess et al [20] implement two CNNs (one on time series data, and one on spectrogram data) to classify between core collapse GW signals and noise glitches, achieving an accuracy of ∼83% for magnetorotational signals at 60 kpc.

II. DEEP CONVOLUTIONAL NEURAL NETWORKS
The primary objective in machine learning is to learn patterns and rules in training data in order to make accurate predictions about previously unseen test data.
Deep learning is an area of machine learning that transforms input data using multiple layers that progressively learn more meaningful representations of the data [16]. Each layer mathematically transforms its input data into an output called a feature map. The final step of each layer is to calculate the values of the feature map using a non-linear activation function. The feature map of one layer is the input of the next layer, allowing us to sequentially stack a network together.

One of the most popular deep learning methods, particularly in the realm of computer vision and image classification, is the convolutional neural network (CNN) [7]. Inputs into CNNs are usually 2D images, and the primary objective is to predict the label (or class) of each image. Feature maps in CNNs are usually 3D tensors with two spatial axes (height and width) and one axis that determines the depth of the layer. These determine the number of trainable parameters in each layer. Colour images (as inputs into CNNs) have depth 3 when using the RGB colour space; one channel each for red, green, and blue. These can be transformed through successive layers into feature maps with arbitrary depths, which encode more abstract features than the three colour channels. We can therefore think of each layer as applying filters to its input to create a feature map.

At the final layer, we get a prediction, $\hat{y}$. In the context of image classification, $\hat{y}$ will be a probability mass function across all the image classes, $c = 1, 2, \ldots, C$. This output is compared to the truth $y$, which in image classification is a Kronecker delta function (i.e., 1 for the true class and 0 otherwise). A distance between $y$ and $\hat{y}$ is computed using a loss function that measures how well the algorithm has performed when making its prediction. The key step in deep learning is to feed this information back through the layers in order to tune the network's parameters. This involves the backpropagation algorithm, which implements an optimization routine to minimize the loss function, and often uses various forms of stochastic gradient descent and the chain rule.

CNNs use three different types of layers stacked together to create a network architecture: convolutional layers, pooling layers, and fully-connected layers. In the first instance, a convolutional layer applies the convolution operation to learn abstract local patterns (such as edges) in images by considering small 2D sliding windows, producing an output feature map (of specified depth). Additional convolutional layers (with the previous layers' feature maps as input) then allow us to progressively learn larger patterns in the spatial hierarchy (such as specific parts of objects) [7].

Pooling layers reduce the number of trainable parameters in a CNN by aggressively downsampling feature maps, i.e., clustering neighbouring locations of the input together using a summary statistic. In the case of max-pooling, the maximum value from each cluster is taken. Pooling produces feature maps that are approximately invariant to local translations of the input [16].

It is often easiest to think of convolutional and pooling layers in terms of the feature map shape (or tensor dimensions) they output; fully-connected layers, however, are best considered in terms of neurons. Each neuron may have many inputs $(x_1, x_2, \ldots, x_n)$ and one output $y$. Each input has a weight $(w_1, w_2, \ldots, w_n)$, and a neuron may have a bias $w_0$ associated with an additional input $x_0 = 1$ [25].
The weights and bias are thought of as the (tunable) parameters of each neuron. The neuron is activated by computing the linear combination of the inputs and weights/biases (i.e., linear activation), which is then fed into a non-linear activation function $f(\cdot)$ to compute its output $y$. That is,

$a = \sum_{i=0}^{n} w_i x_i$,  (1)

$y = f(a)$.  (2)

A fully-connected layer connects one layer of neurons to another. If there are $n$ input neurons and $m$ output neurons, the number of tunable parameters for that layer will be $(n + 1) \times m$.

Perhaps the most challenging issue with fitting CNNs is the potential for over-fitting, as there can be millions of network parameters, and the algorithm may only memorize patterns in the training set and fail to generalize to previously unseen data presented in the test set. This is why it is important to monitor and tune a network using a validation set.

In this paper, we implement an 11 layer CNN. The network architecture is outlined in Table I and visualized in Figure 1. The input layer is a 3D tensor (image) with two spatial axes (width and height) and a depth axis of either one (for grayscale) or three (for RGB). Each convolution layer uses (3 × 3) windows (with stride 1) and each max-pooling layer downsamples by a factor of 2. At the 9th layer, we "flatten" the output feature map from the 8th layer to a 1D vector with the same number of neurons, which then allows us to use fully-connected layers, connecting each neuron in the current layer to the neurons in the previous one.

TABLE I: The CNN architecture. We use 11 layers, first sequencing between convolution and max-pooling layers of increasing depth. The Output Shape column is written as a 3D tensor with indices (Height, Width, Depth). We then flatten the output tensor from the 8th layer into a 1D vector, followed by two fully-connected layers. It is easier to think of fully-connected layers in terms of the number of output neurons. The final output is a probability mass function for the C = 18 different EOS classes.

Layer  Type          Output Shape     Activation
0      Input         (256, 256, 3)
1      Convolution   (256, 256, 32)   ReLU
2      Max-Pooling   (128, 128, 32)
3      Convolution   (128, 128, 64)   ReLU
4      Max-Pooling   (64, 64, 64)
5      Convolution   (64, 64, 128)    ReLU
6      Max-Pooling   (32, 32, 128)
7      Convolution   (32, 32, 128)    ReLU
8      Max-Pooling   (16, 16, 128)
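For concreteness, a network following Table I could be assembled in Keras along the lines below. This is a minimal sketch rather than the exact training script: the (3 × 3) kernels, stride 1, pooling factor of 2, and layer depths follow the table and the text (padding is set to "same" so the convolutions preserve the spatial dimensions, as the table implies), while the width of the first fully-connected layer is an assumed placeholder, since it is not specified in the table.

```python
# Sketch of the 11-layer CNN described in Table I.
# Kernel size, stride, pooling factor, and layer depths follow the paper;
# the 512-neuron fully-connected layer is an assumed value for illustration.
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 3), n_classes=18, n_dense=512):
    model = models.Sequential([
        layers.Input(shape=input_shape),                                 # layer 0: input image
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),   # layer 1
        layers.MaxPooling2D((2, 2)),                                     # layer 2
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),   # layer 3
        layers.MaxPooling2D((2, 2)),                                     # layer 4
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),  # layer 5
        layers.MaxPooling2D((2, 2)),                                     # layer 6
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),  # layer 7
        layers.MaxPooling2D((2, 2)),                                     # layer 8
        layers.Flatten(),                                                # layer 9: 16*16*128 neurons
        layers.Dense(n_dense, activation="relu"),                        # layer 10 (assumed width)
        layers.Dense(n_classes, activation="softmax"),                   # layer 11: EOS probabilities
    ])
    return model
```

Calling build_cnn(input_shape=(256, 256, 1)) would give the single-channel (grayscale) variant used later for the time series and periodogram images.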
The rectified linear unit (ReLU) is a non-linear activation function used on many of the layers in the network and is defined as

$f(x) = \max(0, x)$.  (3)

FIG. 1: The CNN architecture visualized. The feature map (output) produced by each layer is the input into the next layer. Convolution and pooling layers get progressively deeper. The height and width of the feature maps become smaller through pooling.

The softmax function is used as the final activation, the output of which is an 18-dimensional vector of probabilities for each image. This is defined as

$\hat{p}_i^{(c)} = \dfrac{\exp(w_c^{T} x)}{\sum_{c'=1}^{C} \exp(w_{c'}^{T} x)}, \quad c = 1, 2, \ldots, C$,  (4)

where $x$ is the feature map from the previous layer, $w_c$ is the vector of weights connecting the output from the previous layer to class $c$, $C = 18$ as we have 18 different EOS to classify, and $(\hat{p}_i^{(1)}, \hat{p}_i^{(2)}, \ldots, \hat{p}_i^{(C)})$ is the vector of probabilities for the $i$th image.

The loss function that we minimize is the categorical cross-entropy, which is commonly used throughout multi-class classification problems. This is defined as

$L(p, \hat{p}) = -\sum_{i=1}^{N} \sum_{c=1}^{C} p_i^{(c)} \log \hat{p}_i^{(c)}$,  (5)

where $N$ is the number of images in the training set and

$p_i^{(c)} = \begin{cases} 1 & \text{if image } i \text{ belongs to class } c, \\ 0 & \text{otherwise.} \end{cases}$  (6)

We use the RMSProp optimizer as our gradient descent routine. The CNN is implemented in Python using the Keras deep learning framework [7].
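A rough sketch of how the loss in Eq. (5) and the RMSProp optimizer come together in Keras is given below. The build_cnn function is the sketch given after Table I, the arrays x_train and y_train are placeholders standing in for the image tensors and one-hot EOS labels of Sec. III, and the batch size and number of epochs are those quoted later in Sec. IV.

```python
# Minimal compile/train sketch: softmax outputs (Eq. 4), categorical
# cross-entropy loss (Eq. 5), and the RMSProp optimizer, as described above.
import numpy as np

model = build_cnn(input_shape=(256, 256, 3), n_classes=18)
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data with the paper's image dimensions and 18 EOS classes;
# in practice these are the jpeg images described in Sec. III.
x_train = np.random.rand(64, 256, 256, 3).astype("float32")
y_train = np.eye(18, dtype="float32")[np.random.randint(0, 18, size=64)]

history = model.fit(x_train, y_train,
                    batch_size=32,
                    epochs=100,            # as in Sec. IV
                    validation_split=0.2)
```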
III. PREPROCESSING
We use the 1824 simulated rotating core collapse GW signals of Richers et al [29]; the data are publicly available at [28]. Each signal in the data set has a source distance of 10 kpc from Earth. The data are originally sampled at 65535 Hz. We downsample the data to 4096 Hz, as the maximum peak frequency in Richers et al [29] is 1051.4 Hz and, according to the Shannon-Nyquist theorem, we need to sample at at least two times the maximum frequency we wish to resolve. We round up to the nearest base-2 frequency to utilize the speed and efficiency of the fast Fourier transform (FFT).

Before downsampling, we first multiply the time-domain data by a Tukey window, align the data such that t_b = 0, where t_b is the time of core bounce, and restrict our attention to the signal at times t ∈ [t_b − 0.05 s, t_b + 0.075 s], as this is where the most interesting dynamics of the GW signal occur.

No noise (simulated or real) is added to the signal in this paper, as our primary goal is to explore the GW signal dependence on the nuclear EOS.

We need to produce the images to feed into the CNN. We explore the data in three different ways: in the time domain with the time series signal, in the frequency domain with the periodogram (squared modulus of the Fourier coefficients), and in time-frequency space with a spectrogram.

First, we create images of the time-domain data. We transform the data set so all signals are on the unit interval. We translate all signals by subtracting the minimum strain in the entire data set, and then rescale by dividing by the maximum strain in the entire data set. We plot the data, making sure to remove the axes, scales, ticks, and labels, as these would add unwanted noise to the image. We then save each image as a (256 × 256) pixel image in jpeg format. An example of one of these time series images is illustrated in Figure 2.

FIG. 2: 256 × 256 pixel image of the time series of the 670th signal in the Richers et al [29] catalogue. This signal has the HShen EOS.

The second set of images are the periodograms of the GW signals. The squared modulus of the Fourier coefficients is computed and then transformed to the unit interval by translating and rescaling as before. The resulting frequency-domain representations are plotted (on the log scale) and saved in jpeg format as before. The periodogram of the signal presented in Figure 2 is displayed in Figure 3.

FIG. 3: 256 × 256 pixel image of the periodogram of the 670th signal in the Richers et al [29] catalogue. This signal has the HShen EOS.

The third set of images are time-frequency maps of the data. We generate the (256 × 256 pixel jpeg) images by computing and plotting the spectrogram, which represents a signal's power content over time and frequency. We use a sliding window with an overlap of 99% and a Tukey taper. The spectrogram is plotted on the log scale, and power (colour) is normalized by dividing the power in each of the spectrograms by the maximum total power in the catalogue to ensure the images are all on the same scale. As before, axes, ticks, scales, and labels are removed. An example spectrogram is shown in Figure 4.

FIG. 4: 256 × 256 pixel image of the spectrogram of the 670th signal in the Richers et al [29] catalogue. This signal has the HShen EOS.
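The preprocessing and image-generation steps described above could be implemented roughly as follows. This is a sketch under stated assumptions: each catalogue waveform is taken to be a pair of arrays (t, h) with the core bounce at t = 0, the resampling routine and figure settings are illustrative choices, and the Tukey taper parameters and spectrogram window length are placeholder values rather than the exact ones used in this paper.

```python
# Sketch of the preprocessing and image-generation pipeline described above.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

FS_ORIG, FS_NEW = 65535, 4096

def preprocess(t, h):
    h = h * signal.windows.tukey(len(h), alpha=0.1)        # taper (alpha assumed)
    n_new = int(round(len(h) * FS_NEW / FS_ORIG))          # downsample to 4096 Hz
    h = signal.resample(h, n_new)
    t = np.linspace(t[0], t[-1], n_new)
    keep = (t >= -0.05) & (t <= 0.075)                     # [t_b - 0.05 s, t_b + 0.075 s]
    return t[keep], h[keep]

def blank_axes():
    fig = plt.figure(figsize=(2.56, 2.56), dpi=100)        # 256 x 256 pixels
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_axis_off()                                       # no axes, ticks, or labels
    return fig, ax

def save_time_series(t, h, fname, h_min, h_max):
    # h_min/h_max are the catalogue-wide minimum and maximum strain.
    fig, ax = blank_axes()
    ax.plot(t, (h - h_min) / (h_max - h_min), color="black")
    fig.savefig(fname, format="jpeg"); plt.close(fig)

def save_periodogram(h, fname):
    fig, ax = blank_axes()
    power = np.abs(np.fft.rfft(h)) ** 2                    # squared modulus of FFT coeffs
    ax.plot(np.log(power + 1e-30), color="black")          # log scale
    fig.savefig(fname, format="jpeg"); plt.close(fig)

def save_spectrogram(h, fname):
    fig, ax = blank_axes()
    nperseg = 128                                          # window length: assumed value
    f, tt, S = signal.spectrogram(h, fs=FS_NEW, window=("tukey", 0.25),
                                  nperseg=nperseg, noverlap=int(0.99 * nperseg))
    ax.pcolormesh(tt, f, np.log(S + 1e-30))                # log-scaled power
    fig.savefig(fname, format="jpeg"); plt.close(fig)
```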
We then randomly shuffle the spectrogram images such that ∼70% are in the training set (n_training = 1302), ∼15% are in the validation set (n_validation = 261), and ∼15% are in the test set (n_test = 261).
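The random shuffle into training, validation, and test sets with the proportions above can be done with a single permutation of the catalogue indices; the sketch below is illustrative only, and the seed is an arbitrary choice for reproducibility.

```python
# Shuffle the 1824 images into ~70/15/15 training/validation/test splits,
# matching the set sizes quoted above (1302/261/261).
import numpy as np

rng = np.random.default_rng(seed=0)      # seed is an arbitrary choice
idx = rng.permutation(1824)
train_idx = idx[:1302]
val_idx = idx[1302:1302 + 261]
test_idx = idx[1302 + 261:]              # the remaining 261 signals
```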
We run three separate CNNs (one each for the time series images, periodogram images, and spectrogram images) to explore visual patterns with the goal of classifying the nuclear EOS. The input depth for the time series and periodogram images is a single grayscale colour channel, whereas for the spectrogram images it is three RGB colour channels.
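One way to load the saved jpeg images with the appropriate input depth is sketched below. The directory layout (one subdirectory per EOS class under each split) and the use of image_dataset_from_directory are assumptions, not taken from the paper, but the color_mode argument corresponds directly to the grayscale versus RGB channels just described.

```python
# Sketch: load 256x256 jpeg images into Keras datasets with one-hot EOS labels.
import tensorflow as tf

def load_split(path, color_mode):
    return tf.keras.utils.image_dataset_from_directory(
        path,
        labels="inferred",
        label_mode="categorical",     # one-hot labels for the 18 EOS classes
        color_mode=color_mode,        # "grayscale" or "rgb"
        image_size=(256, 256),
        batch_size=32,
        shuffle=True,
    )

# Hypothetical directory names for the spectrogram (RGB) images.
train_ds = load_split("images/spectrogram/train", color_mode="rgb")
val_ds = load_split("images/spectrogram/validation", color_mode="rgb")
```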
IV. RESULTS

We measure the success of the three CNNs in terms of the proportion of test signals that have the correct EOS classification, called the accuracy of the network. In this study, we achieve 64% accuracy for the spectrogram images, 65% for the periodogram images, and 71% for the time series images.

State-of-the-art CNNs can achieve accuracies of up to 95-99% on every-day objects in computer vision competitions such as those based on the ImageNet database [31]. This has been demonstrated effectively in the GW literature (see e.g., [15]). Though our achieved accuracy of 64-71% is lower than this, it is much higher than anticipated. As noted by Richers et al [29], the rotating core collapse GW signal has only very weak dependence on the nuclear EOS. Our results suggest that this could be upgraded to a "moderate" dependence. What is also surprising is that the algorithm achieved this accuracy with a relatively small training data set (n = 1302).

Let us now consider the "top 5" EOS classifications for each image, that is, the five EOS classes with the highest probabilities for each image. We compute the cumulative proportion of images in the test set that are correctly classified within these top 5 classes. The cumulative proportions of correct classifications can be seen in Table II. Interestingly, the CNN trained on time series images outperforms the others. For each CNN, the EOS class with the second highest probability is the correct classification for more than 10% of the test signals, indicating that we can correctly classify the EOS within the top 2 classes 75-84% of the time. For the time series CNN, we achieve 90% correct classifications within the top 3 EOS classes. We can correctly constrain the nuclear EOS to one in five classes (rather than one in 18) 97%, 93%, and 91% of the time for the time series CNN, periodogram CNN, and spectrogram CNN respectively. These results are encouraging and demonstrate that we can constrain the nuclear EOS with reasonable accuracy.

We run the CNN in batches of size 32 for 100 epochs, making sure to monitor validation accuracy and loss. Surprisingly, over-fitting was not an issue with this data set, even though it is relatively small. No regularization, drop-out, or K-fold validation was required. While training accuracy tended towards 100% as the number of epochs increased, validation accuracy remained reasonably constant.

TABLE II: Cumulative proportion of correct classifications.

       Time Series  Periodogram  Spectrogram
Top 1     0.71         0.65         0.64
Top 2     0.85         0.77         0.75
Top 3     0.91         0.85         0.83
Top 4     0.93         0.90         0.85
Top 5     0.97         0.93         0.91
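The "top 5" cumulative proportions reported in Table II can be computed directly from the softmax output of each network. In the sketch below, y_prob is the (n_test × 18) array of predicted class probabilities and y_true holds the integer EOS labels; both names are hypothetical.

```python
# Sketch: cumulative proportion of test images whose true EOS lies within the
# k most probable classes predicted by the network (the columns of Table II).
import numpy as np

def cumulative_topk(y_prob, y_true, k_max=5):
    ranked = np.argsort(-y_prob, axis=1)           # classes ranked most to least probable
    hits = ranked == y_true[:, None]               # True where the true class sits at rank k
    return [hits[:, :k].any(axis=1).mean() for k in range(1, k_max + 1)]
```

For example, cumulative_topk(model.predict(x_test), y_test_labels) would return the column of Table II corresponding to that CNN.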
V. CONCLUSIONS
This paper demonstrated a proof-of-concept that rotating core collapse GW signals moderately depend on the nuclear EOS. We are encouraged by the 71% correct classifications achieved when using the CNN framework to probe visual patterns in rotating core collapse GW signals. We are further encouraged by the 91-97% correct classifications after considering the five EOS classes with the highest estimated probability for each test signal. With this in mind, we plan a follow-up study to explore further how the feature maps of each layer can help us understand exactly how each nuclear EOS influences the GW signal.

The goal of this paper was not to conduct parameter estimation in the presence of noise, but rather to explore the dependence a rotating core collapse GW signal has on the nuclear EOS. However, this is a goal of a future project, where we aim to add real or simulated detector noise to see if we can constrain the nuclear EOS under more realistic settings.

The deep learning framework is becoming a force of its own in the GW data analysis literature; allowing for near-instantaneous low-latency Bayesian posterior computations using pre-trained networks, producing accurate and efficient GW signal and glitch classifications, and allowing us to solve problems previously thought impossible.
ACKNOWLEDGEMENTS
The author would like to thank Nelson Christensen and Ollie Burke for fruitful discussions.

[1] Aasi J, et al (2015) Advanced LIGO. Classical and Quantum Gravity 32(7):074001, doi:10.1088/0264-9381/32/7/074001, URL https://doi.org/10.1088%2F0264-9381%2F32%2F7%2F074001
[2] Abbott B, et al (2020) Optically targeted search for gravitational waves emitted by core-collapse supernovae during the first and second observing runs of Advanced LIGO and Advanced Virgo. Phys Rev D 101:084002, doi:10.1103/PhysRevD.101.084002, URL https://link.aps.org/doi/10.1103/PhysRevD.101.084002
[3] Abdikamalov E, Gossan S, DeMaio AM, Ott CD (2014) Measuring the angular momentum distribution in core-collapse supernova progenitors with gravitational waves. Physical Review D 90:044001
[4] Acernese F, et al (2014) Advanced Virgo: a second-generation interferometric gravitational wave detector. Classical and Quantum Gravity 32(2):024001, doi:10.1088/0264-9381/32/2/024001, URL https://doi.org/10.1088%2F0264-9381%2F32%2F2%2F024001
[5] Astone P, Cerdá-Durán P, Di Palma I, Drago M, Muciaccia F, Palomba C, Ricci F (2018) New method to observe gravitational waves emitted by core collapse supernovae. Phys Rev D 98:122002, doi:10.1103/PhysRevD.98.122002, URL https://link.aps.org/doi/10.1103/PhysRevD.98.122002
[6] Chan ML, Heng I, Messenger C (2019) Detection and classification of supernova gravitational waves signals: A deep learning approach. arXiv.org URL http://search.proquest.com/docview/2331700621/
[7] Chollet F (2018) Deep Learning with Python. Manning Publications Co., Shelter Island, New York
[8] Chua AJK, Vallisneri M (2020) Learning Bayesian posteriors with neural networks for gravitational-wave inference. Phys Rev Lett 124:041102, doi:10.1103/PhysRevLett.124.041102, URL https://link.aps.org/doi/10.1103/PhysRevLett.124.041102
[9] Dimmelmeier H, Ott CD, Marek A, Janka HT (2008) The gravitational wave burst signal from core collapse of rotating stars. Physical Review D 78:064056
[10] Dreissigacker C, Sharma R, Messenger C, Zhao R, Prix R (2019) Deep-learning continuous gravitational waves. Phys Rev D 100:044009, doi:10.1103/PhysRevD.100.044009, URL https://link.aps.org/doi/10.1103/PhysRevD.100.044009
[11] Edwards MC, Meyer R, Christensen N (2014) Bayesian parameter estimation of core collapse supernovae using gravitational wave simulations. Inverse Problems 30(11), doi:10.1088/0266-5611/30/11/114008
[12] Gabbard H, Williams M, Hayes F, Messenger C (2018) Matching matched filtering with deep networks for gravitational-wave astronomy. Phys Rev Lett 120:141103, doi:10.1103/PhysRevLett.120.141103, URL https://link.aps.org/doi/10.1103/PhysRevLett.120.141103
[13] Gabbard H, Messenger C, Heng IS, Tonolini F, Murray-Smith R (2019) Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy. arXiv:1909.06296 [astro-ph.IM]
[14] George D, Huerta E (2018) Deep learning for real-time gravitational wave detection and parameter estimation: Results with Advanced LIGO data. Physics Letters B 778:64-70, doi:10.1016/j.physletb.2017.12.053
[15] George D, Shen H, Huerta EA (2018) Classification and unsupervised clustering of LIGO data with deep transfer learning. Phys Rev D 97:101501, doi:10.1103/PhysRevD.97.101501, URL https://link.aps.org/doi/10.1103/PhysRevD.97.101501
[16] Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. The MIT Press
[17] Gossan SE, Sutton P, Stuver A, Zanolin M, Gill K, Ott CD (2016) Observing gravitational waves from core-collapse supernovae in the advanced detector era. Phys Rev D 93:042002, doi:10.1103/PhysRevD.93.042002, URL https://link.aps.org/doi/10.1103/PhysRevD.93.042002
[18] Green SR, Gair J (2020) Complete parameter inference for GW150914 using deep learning. arXiv:2008.03312 [astro-ph.IM]
[19] Green SR, Simpson C, Gair J (2020) Gravitational-wave parameter estimation with autoregressive neural network flows. arXiv:2002.07656 [astro-ph.IM]
[20] Iess A, Cuoco E, Morawski F, Powell J (2020) Core-collapse supernova gravitational-wave search and deep learning classification. Machine Learning: Science and Technology 1(2):025014, doi:10.1088/2632-2153/ab7d31, URL https://doi.org/10.1088%2F2632-2153%2Fab7d31
[21] Janka HT (2012) Explosion mechanisms of core-collapse supernovae. Annual Review of Nuclear and Particle Science 62(1):407-451, doi:10.1146/annurev-nucl-102711-094901, URL https://doi.org/10.1146/annurev-nucl-102711-094901
[22] Kuroda T, Kotake K, Hayama K, Takiwaki T (2017) Correlated signatures of gravitational-wave and neutrino emission in three-dimensional general-relativistic core-collapse supernova simulations. The Astrophysical Journal 851(1):62, doi:10.3847/1538-4357/aa988d, URL https://doi.org/10.3847%2F1538-4357%2Faa988d
[23] Lattimer JM (2012) The nuclear equation of state and neutron star masses. Annual Review of Nuclear and Particle Science 62(1):485-515, doi:10.1146/annurev-nucl-102711-095018, URL https://doi.org/10.1146/annurev-nucl-102711-095018
[24] Logue J, Ott CD, Heng I, Kalmus P, Scargill JHC (2012) Inferring core-collapse supernova physics with gravitational waves. Phys Rev D 86:044023, doi:10.1103/PhysRevD.86.044023, URL https://link.aps.org/doi/10.1103/PhysRevD.86.044023
[25] MacKay DJC (2003) Information Theory, Inference, and Learning Algorithms. Cambridge University Press, USA
[26] Powell J, Gossan SE, Logue J, Heng IS (2016) Inferring the core-collapse supernova explosion mechanism with gravitational waves. Phys Rev D 94:123012, doi:10.1103/PhysRevD.94.123012, URL https://link.aps.org/doi/10.1103/PhysRevD.94.123012
[27] Powell J, Szczepanczyk M, Heng IS (2017) Inferring the core-collapse supernova explosion mechanism with three-dimensional gravitational-wave simulations. Phys Rev D 96:123013, doi:10.1103/PhysRevD.96.123013, URL https://link.aps.org/doi/10.1103/PhysRevD.96.123013
[28] Richers S, Ott CD, Abdikamalov E, O'Connor E, Sullivan C (2016) Equation of State Effects on Gravitational Waves from Rotating Core Collapse. doi:10.5281/zenodo.201145, URL https://doi.org/10.5281/zenodo.201145
[29] Richers S, Ott CD, Abdikamalov E, O'Connor E, Sullivan C (2017) Equation of state effects on gravitational waves from rotating core collapse. Phys Rev D 95:063019, doi:10.1103/PhysRevD.95.063019, URL https://link.aps.org/doi/10.1103/PhysRevD.95.063019
[30] Röver C, Bizouard MA, Christensen N, Dimmelmeier H, Heng I, Meyer R (2009) Bayesian reconstruction of gravitational wave burst signals from simulations of rotating stellar core collapse and bounce. Physical Review D - Particles, Fields, Gravitation and Cosmology 80(10), doi:10.1103/PhysRevD.80.102004
[31] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211-252, doi:10.1007/s11263-015-0816-y
[32] Shen H, Huerta EA, Zhao Z, Jennings E, Sharma H (2019) Deterministic and Bayesian neural networks for low-latency gravitational wave parameter estimation of binary black hole mergers. arXiv:1903.01998 [gr-qc]
[33] Smith J, Gossett P (1984) A flexible sampling-rate conversion method. In: ICASSP '84. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol 9, pp 112-115
[34] Somiya K (2012) Detector configuration of KAGRA: the Japanese cryogenic gravitational-wave detector. Classical and Quantum Gravity 29(12):124007, doi:10.1088/0264-9381/29/12/124007, URL https://doi.org/10.1088%2F0264-9381%2F29%2F12%2F124007
[35] Zevin M, Coughlin S, Bahaadini S, Besler E, Rohani N, Allen S, Cabero M, Crowston K, Katsaggelos AK, Larson SL, Lee TK, Lintott C, Littenberg TB, Lundgren A, Østerlund C, Smith JR, Trouille L, Kalogera V (2017) Gravity Spy: integrating Advanced LIGO detector characterization, machine learning, and citizen science. Classical and Quantum Gravity 34(6):064003, doi:10.1088/1361-6382/aa5cea