Segmentation of spectroscopic images of the low solar atmosphere by the Self Organizing Map technique
MNRAS, 1–13 (2020). Preprint 24 February 2021. Compiled using MNRAS LaTeX style file v3.0
F. Schilliro,★ P. Romano
Osservatorio Astrofisico di Catania, INAF, via S. Sofia n. 78, 95123 Catania, Italy
★ E-mail: [email protected]
Accepted XXX. Received YYY; in original form ZZZ
ABSTRACT
We describe the application of Semantic Segmentation by means of the Self Organizing Map (SOM) technique to a high spatial and spectral resolution data set acquired along the Hα line at 656.28 nm by the Interferometric Bidimensional Spectrometer installed at the focal plane of the Dunn Solar Telescope. This machine learning approach allowed us to identify several features corresponding to the main structures of the solar photosphere and chromosphere. The obtained results show the capability and flexibility of this method in identifying and analyzing the fine structures which characterize solar activity in the low atmosphere. This is a first successful application of the SOM technique to astrophysical data sets.

Key words: Sunspots – Chromosphere – Photosphere – Simulations – Data Analysis
1 INTRODUCTION

With the rise of big data, machine learning has become particularly important for solving data science problems in different areas (finance, image processing, computational biology, automotive, aerospace, natural language processing) (Aggarwal 2018). Machine learning algorithms and techniques allow scientists to automate computer learning in order to make classifications, decisions and predictions without continuous interaction with the systems under analysis. In general, machine learning, and semantic segmentation in particular, uses two types of approaches: supervised learning, mostly focused on 'deep learning' methodologies, e.g. based on Convolutional Networks (Hausen et al. 2019; Shelhamer et al. 2017), and unsupervised learning. The former algorithms train a chain of neural networks by processing a known input and correlated output data, so that it is possible to predict future outputs by learning from 'experiences'. Instead, the less used unsupervised learning methods (Barlow 1989; Zeljko 2014) find hidden patterns or intrinsic structures inside the input data, allowing data classification or segmentation in a non 'a priori' organized fashion.

The application of machine learning techniques also has great potential in the space weather field in general, and for solar images in particular, with the opportunity to identify new indicators for the forecast of eruptive events occurring on the Sun, such as flares and Coronal Mass Ejections (see Benz 2017), which produce effects in the near-Earth space environment, as well as at the Earth's surface. In fact, the scientific community has developed a growing interest in the automatic identification of the signatures characterizing the sources of disturbances whose radiative, field, and particle energy directly impact the Earth (e.g., Barnes et al. 2016; Kim et al. 2019; Korsós et al. 2019; Florios et al. 2018; Kontogiannis et al. 2018; Falco et al. 2019).
In particular, the characterization of solar active regions allows us to identify the topological configurations which are suitable for the occurrence of sudden, rapid, and intense solar eruptions (e.g., Romano et al. 2015, 2018, 2019).

In this paper, we consider for the first time the application of Semantic Segmentation by means of the 'Self Organizing Map' (SOM) technique (Kohonen 1995) to high-resolution spectroscopic observations of the low solar atmosphere. We used a data set taken during an observing campaign carried out in May 2015 at the NSO/Dunn Solar Telescope (DST). The data set, showing a sunspot and its main structures visible at photospheric and chromospheric level, is described in the next Section. We describe the technique applied to get a skeletonization of the main structures in Section 3. Section 4 reports the main results and the advantages of the application of this new technique to solar images. The last Section is dedicated to drawing our conclusions.
2 THE DATA SET

We used data acquired by the Interferometric Bidimensional Spectrometer (IBIS; Cavallini 2006), which operated at the NSO/DST. The data set consists of high-resolution data, with a pixel scale of 0.′′[...], formed by spectral images sampled along the Hα line and taken during the best seeing conditions of that day of observation. The original field-of-view (FOV) of the IBIS data was 500 × [...] pixels. The data were restored with the
Multi-frame Blind Deconvolution (MFBD; Löfdahl 2002) technique. The MFBD allowed us to reduce the seeing degradation, achieving a spatial resolution of about 0.′′[...] along the Hα line. In the spectral points near the continuum (top left and bottom right panels of Figure 1) and in the wings of the spectral line (top right and bottom left panels of Figure 1) we are able to identify the main fine structures of the sunspot and of the surrounding environment typical of the photospheric level, such as the umbra, the penumbral filaments, a light bridge, and the granular and intergranular pattern (see Solanki 2003, for an overview of these structures). Near the center of the Hα line (middle row panels of Figure 1) we clearly see the chromospheric structures, such as the superpenumbra, some facular regions in the top right corner of the FOV, and a filament portion in the top left corner of the FOV (see Leka & Metcalf 2003).

In order to better describe the potential of the SOM technique in the segmentation of our spectral data set, we also considered some physical parameters which can be deduced from the analysis of the Hα line profile. We reconstructed the profile of the Hα line in each spatial pixel by fitting the corresponding signals obtained in the monochromatic images with a linear background and a Gaussian shaped line. Using this reconstructed profile we estimated the line depth (LD) as:

$$ \mathrm{LD} = \frac{F_c - F_o}{\langle I \rangle} \qquad (1) $$

where $F_c$ and $F_o$ are the maximum and the minimum intensities of the line profile, taken in the continuum and at the centroid of the Gaussian fit, respectively, while $\langle I \rangle$ is the average intensity of the quiet Sun, measured in a box of 10 × 10 pixels taken in the southern portion of the FOV, outside the sunspot, where few chromospheric structures were visible. Using the full width at half maximum (FWHM) of the Gaussian fit we estimated the spectral line width (LW). We also measured the line-of-sight (LOS) velocity in the photosphere from the Doppler shift (DS) of the centroid of the reconstructed line profile in each spatial point with respect to the reference wavelength (656.28 nm).
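As an illustration, the per-pixel profile fit described above (linear background plus a Gaussian-shaped absorption line) can be sketched as follows. This is a minimal sketch, not the authors' code: the function names, the wavelength grid and the initial guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458   # speed of light [km/s]
LAMBDA0 = 656.28     # Halpha reference wavelength [nm]

def model(wl, a, b, amp, cen, sig):
    """Linear background minus a Gaussian-shaped absorption line."""
    return a + b * wl - amp * np.exp(-0.5 * ((wl - cen) / sig) ** 2)

def line_parameters(wl, spec, i_quiet):
    """Fit one pixel's profile and return (LD, LW, v_LOS).

    wl      : wavelengths of the sampled spectral points [nm]
    spec    : observed intensities at those wavelengths
    i_quiet : <I>, the mean quiet-Sun intensity used in Eq. (1)
    """
    # Initial guesses: continuum level, flat slope, depth, line position, width
    p0 = [spec.max(), 0.0, spec.max() - spec.min(), wl[np.argmin(spec)], 0.05]
    popt, _ = curve_fit(model, wl, spec, p0=p0)
    a, b, amp, cen, sig = popt
    f_c = a + b * cen                  # continuum intensity at the line position
    f_o = f_c - amp                    # intensity at the centroid of the Gaussian
    ld = (f_c - f_o) / i_quiet         # line depth, Eq. (1)
    lw = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sig)   # FWHM of the Gaussian
    v_los = C_KMS * (cen - LAMBDA0) / LAMBDA0          # Doppler velocity [km/s]
    return ld, lw, v_los
```

Applied to a noiseless synthetic profile, the routine recovers the input depth, width and shift; on real spectra the sparse wavelength sampling makes the choice of initial guesses more delicate.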
3 THE METHOD

Self-Organizing Maps (SOMs) are an important class of neural networks capable of unsupervised autonomous learning from data, adapting according to rules of synaptic plasticity in response to external stimuli. SOM algorithms are inspired by neuro-biological studies indicating that different sensory inputs (motor, visual, auditory, etc.) are mapped onto corresponding areas of the cerebral cortex in an orderly fashion. This form of map, known as a topographic map, has two important properties:

(i) At each stage of representation, or processing, each piece of incoming information is kept in its proper context/neighbourhood.

(ii) Neurons dealing with closely related pieces of information are kept close together, so that they can interact via short synaptic connections.

A particular kind of SOM, known as a Kohonen Network (see Kohonen 1995, for an overview), belongs to the class of vector-coding algorithms; it offers a topological mapping which places a fixed number of vectors (code words) in an oversized input space, thus allowing data compression, data clustering or data classification. This ability to extract information from the data and to understand
Figure 1. Sample of the spectral images acquired along the Hα line by IBIS and used in our analysis. All the images were acquired on May 18 at 14:42 UT.

how these spontaneously group into clusters is enabled by a neural structure with a single computational layer arranged in rows and columns, in which each neuron is fully connected to the others and to all the source nodes in the input layer. The self-organization algorithm is composed, in general, of four major processes, which involve the input layer, as the source of the data to be processed, and the fully connected neural lattice, which adapts its structure and morphology in order to replicate the organization of the input data according to their neighbourhood. In our study, this lattice morphology is used to semantically group pixels into regions, the features, which are detected according to the distance between the multi-spectral set of input data collected for each pixel and the normalized random initial formatting of the net, as depicted in Figure 2.

1. Initialization: The input pattern is a normalized data-cube composed of a K-dimensional vector of spectral values for each of the N × M pixels, as described in Figure 2. All the K-dimensional connection weights for each neuron and for each input value, $w_{ji}$, are initialized with normalized random values.

Figure 2.
Neural lattice of a Kohonen Network: the N × N fully connected lattice processes the K-dimensional input layer, i.e. the vector of spectral values associated to each pixel of the image under processing.
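The initialization step can be sketched as follows. This is an illustrative sketch (the array names and the normalization choice are our assumptions): each pixel's spectrum becomes one input pattern and the lattice weights start as normalized random vectors.

```python
import numpy as np

def init_som(cube, n_lattice, seed=0):
    """Prepare SOM inputs and weights.

    cube      : (N, M, K) data-cube, one K-dimensional spectrum per pixel
    n_lattice : side n of the n x n neural lattice
    Returns (X, W): input patterns of shape (N*M, K) and random
    weights of shape (n*n, K), both normalized to unit length.
    """
    rng = np.random.default_rng(seed)
    N, M, K = cube.shape
    X = cube.reshape(N * M, K).astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # one normalized pattern per pixel
    W = rng.random((n_lattice * n_lattice, K))
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # normalized random initial weights
    return X, W
```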
2. Competition: For each input pattern, the neurons compute their respective values of a discriminant function, which provides the basis for competition. For the D-dimensional spectral input space, the discriminant function is calculated as the squared Euclidean distance between each input pattern, $\bar{x} = (x_i : i = 1, \ldots, D)$, and the connection weights between the input units $i$ and the neurons $j$ of the computational layer, $\bar{w}_j = (w_{ji} : j = 1, \ldots, L;\; i = 1, \ldots, D)$, where $L = N \times N$ is the neural lattice dimension:

$$ d_j(\bar{x}) = \sum_{i=1}^{D} (\bar{x}_i - \bar{w}_{ji})^2 \qquad (2) $$

The neuron $E(x)$ whose weight vector comes closest to the input vector (i.e. is most similar to it) is declared the winner. In this way the continuous input space can be mapped to the discrete output space of neurons by a simple process of competition between the neurons.

3. Cooperation: The winning neuron $E(x)$ determines the spatial location of a topological neighbourhood of excited neurons, thereby providing the basis for cooperation among neighbouring neurons; this corresponds, in neuro-biological studies, to the lateral interaction within a set of excited neurons. When one neuron fires, its closest neighbours tend to get excited more than those further away, and this excitation decays with distance according to a Gaussian law:

$$ G_{\delta, E(x)} = \mathrm{e}^{- S^2_{\delta, E(x)} / 2\sigma^2} \qquad (3) $$

where

• $E(x)$ is the winning neuron for the input vector $x$;
• $\delta$ is the general index of a neuron of the lattice;
• $S_{\delta, E(x)}$ is the lateral distance on the lattice between $E(x)$ and $\delta$;
• $\sigma$ is the size of the topological neighbourhood, typically shrinking

Figure 3.
Group of cooperating neurons selected by the application of the Gaussian topological neighbourhood operator.

with time (expressed in number of iterations) according to the function $\sigma(t) = \sigma_0 \, \mathrm{e}^{-t/\tau}$, where $\sigma_0$ and $\tau$ are constants set by the neural network designer: respectively, the initial neighbourhood size and the decay time of the neighbourhood. The cooperation process is depicted in Figure 3, which highlights the group of neurons involved by the application of the Gaussian topological neighbourhood operator.

4. Adaptation: The excited neurons decrease their individual values of the discriminant function in relation to the input pattern through a suitable adjustment of the associated connection weights, such that the response of the winning neuron to a subsequent application of a similar input pattern is enhanced. The adaptation of the SOM is the process responsible for the formation of the feature map, which links the input data to the self-organized neuronal lattice. The point of the topographic neighbourhood is that not only the winning neuron gets its weights updated: its neighbours have their weights updated as well, although by not as much as the winner itself. In practice, the weight update equation is

$$ \Delta \bar{w}_{\delta} = \eta(t) \, G_{\delta, E(x)}(t) \, (\bar{x} - \bar{w}_{\delta}) \qquad (4) $$

$$ \eta(t) = \eta_0 \, \mathrm{e}^{-t/\tau_n} \qquad (5) $$

Upon repeated presentations of the training data, the synaptic-weight vectors $\bar{w}_{\delta}$ tend to follow the distribution of the input vectors because of the neighbourhood updating. The algorithm therefore leads to a topological ordering of the feature map in the input space, in the sense that neurons that are adjacent in the lattice tend to have similar synaptic-weight vectors, drawn towards the input vector $\bar{x}$. The learning-rate parameter $\eta(t)$, which is responsible for the speed of the adaptation process, should also be time varying, as indicated in equation
(5), and shrinks in time according to the initial learning rate $\eta_0$ and its persistence in time $\tau_n$.

The operation of image segmentation is basically the association of each pixel $x_i$ of a frame, which represents a K-vector spectral data set, to a unique cluster $C_l$, according to a method which in our case is a SOM. The segmented region associated to each different cluster can be expressed as a 'feature' $\rho_l$, i.e. the set of all the pixels belonging to that cluster:

$$ \rho_l = \{\, x_i \;\; \forall \, x_i \in C_l \,\}, \qquad |C_l| = \sum_{i=1}^{L} \rho_{l,i} $$

In cluster analysis, two strategies are possible for finding an optimal data classification: a supervised and an unsupervised approach. While the first one is adopted when the segmentation of regions into features is well specified and tested, the unsupervised one is a method in which the number of features is not previously specified, and it arises from the data structure as the result of a data processing that produces different clustering indices.

In our case the number of clusters, or 'features', is not known 'a priori', because a class is directly connected to the presence of a region of interest that may or may not be in the tile under study. Therefore, it can be useful to find inside the data structure whether an optimal number of possible groups of data, well separated from each other, exists, each of which corresponding to a different region composing the image frame.

For the purpose of discovering the number of independent regions, which becomes important at the stage of designing the SOM network, different clustering indices and parameters were investigated, such as the Calinski-Harabasz (CH) index (see Calinski & Harabasz 1974), the Silhouette index (see Rousseeuw 1987), and the Davies-Bouldin (DB) index (see Davies & Bouldin 1979). In particular, the DB index has already been used in the literature for solar EUV data clustering (see Caballero 2013), confirming that it is a good and fast algorithm which converges with the analysed data.
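The competition, cooperation and adaptation processes described above can be condensed into a short training loop. This is a generic sketch of a Kohonen update, not the authors' implementation: the decay constants, the epoch count and the per-epoch time index are illustrative assumptions.

```python
import numpy as np

def train_som(X, W, coords, n_epochs=200, eta0=0.5, sigma0=2.0,
              tau=100.0, tau_n=200.0, seed=0):
    """Train a SOM following the competition/cooperation/adaptation steps.

    X      : (P, D) input patterns, one spectrum per pixel
    W      : (L, D) initial weight vectors of the L lattice neurons
    coords : (L, 2) lattice coordinates of the neurons
    """
    rng = np.random.default_rng(seed)
    for t in range(n_epochs):
        sigma = sigma0 * np.exp(-t / tau)      # shrinking neighbourhood size
        eta = eta0 * np.exp(-t / tau_n)        # decaying learning rate, Eq. (5)
        for x in X[rng.permutation(len(X))]:
            # Competition: squared Euclidean distance, Eq. (2)
            d = np.sum((W - x) ** 2, axis=1)
            winner = np.argmin(d)
            # Cooperation: Gaussian neighbourhood on the lattice, Eq. (3)
            s2 = np.sum((coords - coords[winner]) ** 2, axis=1)
            g = np.exp(-s2 / (2.0 * sigma ** 2))
            # Adaptation: neighbourhood-weighted weight update, Eq. (4)
            W += eta * g[:, None] * (x - W)
    return W

def segment(X, W):
    """Assign each pixel to the index of its winning neuron (its feature)."""
    d = np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2)
    return np.argmin(d, axis=1)
```

On a toy data set made of two well-separated spectral clusters, the trained weights settle near the two cluster centres and `segment` returns the corresponding feature map.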
In our case, we found that the DB index provided the same results in the classification of the optimal number of independent regions, considering not only the clustering of the N × M × K spectral data-cube, but also the clustering of the N × M × [...] data-cube of the physical parameters (see Figure 4). These results, together with the choice of a lattice of dimension N = 4, complete the overall architecture of the segmentation algorithm, which still needs some
Figure 4.
Davies-Bouldin index for the multi-spectral (top) and physical-parameter (bottom) data-cubes.

tuning of the values of $\sigma$, $\tau$, $\eta$ and $\tau_n$. This architecture is depicted in Figure 5.

4 RESULTS

The algorithm defined in Figure 5 runs after a preparation phase of the input data cube, which foresees the adaptation of the image to the appropriate size (in this case 350 × 700 pixels) and the normalization of the brightness values of each pixel for each wavelength of the data cube. The SOM network is organized in such a way as to guarantee segmentation into complementary regions, so that the results of the elaboration are two-dimensional maps which display the features, i.e., semantically separated regions.

One of the main advantages of the SOM technique is its flexibility in the choice of the number of features. In fact, according to the needs of the user, it is possible to segment the data set with different levels of accuracy. As mentioned above, on the basis of the DB index we decided to consider the results obtained for 16 clusters (Figure 6) and
to describe in detail the different features found in this particular case. However, a different number of clusters, e.g. 9 (Figure 7), results in the merging of some of the features obtained for a higher value (compare Figures 6 and 7).

We named each feature with a progressive number from 1 to 16, and we note that they are able to segment in great detail all the structures visible in the FOV at the different wavelengths. Moreover, we found an interesting accordance of the features not only with the spectral properties of the segmented photospheric and chromospheric structures, but also with some of their physical parameters. For example, we can see that features n. 1 and 2 identify the chromospheric facula at North. The core of the facula is well segmented by feature n. 1, while feature n. 2 corresponds to the surrounding pattern, visible in the core of the Hα line (see the middle panels of Figure 1), where the brightness of the facula is limited by the presence of the neighbouring fibrils (top left panel of Figure 8). In this case, we note that the differences between features n. 1 and n. 2 lie not only in their values of intensity in the core of the Hα line, but also in the different values of the line depth (bottom left panel of Figure 8). Features n. 1 and n. 2 correspond to LD greater than 0.8 and 1.2, respectively. Instead, we do not see any correlation of these features with the LW (top right panel of Figure 8) or the LOS velocity (bottom right panel of Figure 8) of the corresponding structures.

The umbra of the sunspot is well segmented by features n. 4, 7 and 8 (see the contours in Figure 9). Combining these three features we are able to cover the whole umbra, while considering only some of them we are able to identify different sub-regions, gradually more extended. Feature n. 4 corresponds to the umbra characterized by I/⟨I⟩ < 1.[...], while features n. 7 and 8 correspond to I/⟨I⟩ < 1.[...] and I/⟨I⟩ < 1.5, respectively. LD does not show any significant variation in the umbra (see the bottom left panel of Figure 9). Only LW and the LOS velocity seem to characterize these features. In particular, it is noteworthy that feature n. 7 corresponds to the portion of the umbra with a velocity of about 0.5 km s⁻¹ (bottom right panel of Figure 9).

The penumbra of the sunspot is usually a more complicated structure to segment, due to the strong inhomogeneities in the azimuthal direction as well as along the line of sight. The penumbra seems to consist of dark filaments flanked by lateral brightenings, but we can also see bright filaments formed by a few penumbral grains that are radially aligned. Nevertheless, features n. 3, 6, 9, 10, 11 and 12 are useful to identify the different parts of the penumbra. In particular, features n. 9 and 10 seem to be useful to discriminate between the dark and bright regions forming the fine structure of the penumbra at photospheric level, while features n. 11 and 12 identify the superpenumbra at chromospheric level (see Figure 10). Taking into account the physical properties corresponding to these features, we can see that features n. 3, 6 and 9 correspond to different levels of LW, LD and LOS velocity, although due to the Evershed flow (Evershed 1909) we cannot associate specific values of LW and DS to each feature. Only the LD shows different levels for each feature; in fact, features n. 3, 6 and 9 are characterized by LD around 0.4, 0.5 and 0.7, respectively. For the superpenumbra we can also see that features n. 10, 11 and 12 correspond to different levels of LW (top right panel of Figure 11), LD (bottom right panel of Figure 11) and LOS velocity (bottom left panel of Figure 11).

Features n. 13, 14 and 15 correspond to the fibrils observed in the region surrounding the sunspot, as depicted in Figure 12. In particular, feature n.
13 is characterized by a FWHM of the Hα line of about 2.0 Å, an LD of about 0.4 and a positive LOS velocity (around 1 km s⁻¹). Instead, feature n. 14 corresponds to the portion of the fibrils with LW ∼ [...], while different values of LW and

Figure 5.
Architecture for Semantic Segmentation using a Self Organizing Map with a 4 × 4 computational neural lattice.
LOS velocity characterize feature n. 15 (see the bottom right panel of Figure 12).

Finally, feature n. 16 can be considered to study the filament portions visible in the FOV. In fact, we clearly recognize the elongated shape of a filament in the top left corner of the bottom right panel of Figure 13. Instead, feature n. 5 seems to identify the brighter regions corresponding to the footpoints of the filaments, the so-called umbral filaments, i.e., structures where the increase of the plasma emission can be due to the accumulation of plasma coming from higher solar atmospheric levels into the photosphere and the low chromosphere (see Guglielmino et al. 2017, 2019, for more details). This is confirmed by the LOS velocity map (bottom right panel of Figure 13), where we see that the black contours (feature n. 5) correspond to the stronger red-shifted areas, with velocity lower than −8 km s⁻¹, while the red contours (feature n. 16), indicating the upper parts of the filaments, are characterized by LOS velocities closer to 0 km s⁻¹.

5 CONCLUSIONS

The use of Machine Learning methods becomes increasingly important in the world of big data, for the ability to process data in a massive and intelligent way on increasingly performing processors. In this work, we applied for the first time the SOM method to high resolution images of the solar atmosphere. We exploited the main advantage of a bi-dimensional spectrometer based on Fabry-Perot devices, such as IBIS. Using high spectral and spatial resolution images taken at different wavelengths along the same spectral line (in our case the Hα line), we were able to identify different structures of the solar atmosphere which depend on the plasma and magnetic field properties. The application of the SOM technique proved perfectly suitable to segment the different images and to isolate among these the structures for a subsequent analysis.
We remark that this method also seems to achieve a good correspondence between the features and the physical properties of the identified structures. In fact, we found some differences in terms of physical or spectral properties (e.g. line core depth, line width and Doppler shift) exhibited by features segmenting different portions of the same solar structure.

The advantages of the algorithm described in this paper are certainly manifold. First of all, the developed algorithm is very efficient: using our data set, formed by 17 images of 350 × 700 pixels, its computation requires no pre-elaboration of the data and took less than one minute for a SOM calculation with 200 epochs, using an octa-core i9 processor with a 2.3 GHz clock. Moreover, the algorithm can be parallelized and can be exported to heterogeneous calculation platforms (e.g., GPU, FPGA) with greater computational efficiency than the platform used to obtain the prototype results. In addition,
Figure 6.
Feature lattice elaborated by the 4 × 4 SOM, performing the Semantic Segmentation of 16 different regions.

this algorithm can be implemented on front-end devices that allow a complete automatic analysis of the collected data and its processing in an almost real-time regime (Hughes et al. 2019), so that the monitoring of the regions of interest could be automatic and independent of the operator. In comparison to other methods reported in the literature, we note that the weakness of this method concerns the different position of a feature in the SOM lattice, mainly after the elaboration of subsequent data sets. This issue has some consequences when the user needs to compare or elaborate features in different frames, in order to perform their tracking or to measure the evolution of their physical properties. For this reason, other segmentation algorithms, like Yet Another Feature Tracking Algorithm (YAFTA; Welsch et al. 2004), certainly provide better performances for this purpose. However, research activity aimed at identifying some characteristic topics of the features, in order to run a subsequent fast, supervised network in cascade, is in progress.

Moreover, we believe that the segmentation carried out is very accurate and is based on the determination of possible clusters emerging from the data themselves. Unlike other methods (Aschwanden 2010; De Visscher et al. 2015; Schad 2017; Perez et al. 2013; Zharkova 2005), the segmentation reveals all the possible structures in one run, and it is verified on the basis of objective parameters that make the segmented regions complementary and physically well characterized; for this reason it is known as semantic segmentation.

This work represents a first step towards the application of the SOM
Figure 7.
Feature lattice elaborated by the 3 × 3 SOM, performing the Semantic Segmentation of 9 different regions.

technique to astrophysical data sets. Taking into account that other skeletonization methods are not able to identify so many structures at the same time, and that such a result could not be achieved by a simple selection of several thresholds, we think that the SOM technique may find very useful applications in the analysis of this kind of data in the context of the space weather field. In fact, we also plan to apply the same technique to other solar data. We think that its application to full disk images taken at different wavelengths, corresponding to different layers of the solar atmosphere, e.g., images taken by the Solar Dynamics Observatory (SDO; Pesnell et al. 2012), may reveal further potentialities of this kind of unsupervised approach (Verbeeck 2013).

Finally, the possibility to follow the evolution of the features identified by the SOM technique over several days of observation could be a useful tool to find new parameters which are able to forecast the occurrence of eruptive events in solar active regions.
ACKNOWLEDGEMENTS
The authors wish to thank the DST staff for its support during the observing campaigns. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 824135 (SOLARNET
Figure 8.
Superimposition of the contours of the features corresponding to the facula over the spectral image taken at 656.274 nm.

project). This work was supported by the Italian MIUR-PRIN 2017 on 'Space Weather: impact on circumterrestrial environment of solar activity' and by the Space Weather Italian COmmunity (SWICO) Research Program. A special thanks to the MOSAICo project (Metodologie Open Source per l'Automazione Industriale e delle procedure di CalcOlo in astrofisica), funded by the Italian MISE (Ministero dello Sviluppo Economico).
Figure 9.
Superimposition of the contours of the features corresponding to the sunspot umbra over the spectral image taken at 656.134 nm.
DATA AVAILABILITY
The data set used in this paper is available at http://ibis.oa-roma.inaf.it/IBISA/, in the archive containing data acquired with IBIS during several observing campaigns. The IBIS-A archive has been realised in the framework of the FP7 SOLARNET High-resolution Solar Physics Network, which aims at integrating the major European infrastructures in the field of high-resolution solar physics, as a step towards the realisation of the European Solar Telescope (Collados et al. 2017).
Figure 10.
Superimposition of the contours of the features corresponding to the sunspot penumbra over the spectral image taken at 656.274 nm.
REFERENCES
Aggarwal, C. C., 2018, Neural Networks and Deep Learning, Springer
Aschwanden, M. J., 2010, Sol. Phys., 262, 235
Barnes, G., et al., 2016, ApJ, 829, 89
Barlow, H. B., 1989, Neural Computation, 1, 295
Benz, A. O., 2017, Living Reviews in Solar Physics, 14, 2
Caballero, C., & Aranda, M. C., 2013, Sol. Phys., 283, 691
Figure 11.
Superimposition of the contours of the features corresponding to the superpenumbra over the spectral image taken at 656.274 nm.

Calinski, T., & Harabasz, J., 1974, Communications in Statistics - Theory and Methods, 3, 1
Cavallini, F., 2006, Sol. Phys., 236, 415
Cenzigler, C., & Kerem Un, M., 2017, British Journal of Mathematics and Computer Science, 22(6), 1
Collados, M., et al., 2010, AN, 331, 615
Davies, D. L., & Bouldin, D. W., 1979, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-1, 224
De Visscher, R., et al., 2015, J. Space Weather Space Clim., 5, A34

Figure 12.
Superimposition of the contours of the features corresponding to the fibrils over the spectral image taken at 656.274 nm.

Evershed, J., 1909, Obs, 32, 291
Falco, M., Costa, P., & Romano, P., 2019, JSWSC, 9, 22
Florios, K., Kontogiannis, I., Park, S.-H., et al., 2018, Sol. Phys., 293, 2
Hausen, R., & Robertson, B. E., 2020, ApJS, 248, 1
Guglielmino, S. L., Romano, P., & Zuccarello, F., 2017, ApJL, 846, L16
Guglielmino, S. L., Romano, P., Ruiz Cobo, B., Zuccarello, F., & Murabito, M., 2019, ApJ, 880, 34
Hughes, M. J., Hsu, V. W., Seaton, D. B., Bain, H. M., Darnel, J. M., & Krista, L., 2019, J. Space Weather Space Clim., 9, A38
Figure 13.
Superimposition of the contours of the features corresponding to the filaments over the spectral image taken at 656.274 nm.

Leka, K. D., & Metcalf, T. R., 2003, Sol. Phys., 212, 361
Kim, T., Park, E., Lee, H., et al., 2019, Nature Astronomy, 3, 397
Kohonen, T., 1997, Self-Organizing Maps, Heidelberg: Springer (second edition)
Kontogiannis, I., Georgoulis, M. K., Park, S.-H., & Guerra, J. A., 2018, Sol. Phys., 293, 6
Korsós, M. B., Yang, S., & Erdélyi, R., 2019, JSWSC, 9, 6
Löfdahl, M. G., 2002, SPIE, 4792, 146
Perez-Suarez, D., in Applied Signal and Image Processing: Multidisciplinary Advancements (ed. R. Qahwaji, R. Green & E. L. Hines), pp. 207-225
Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C., 2012, Sol. Phys., 275, 3
Ritter, H., & Kohonen, T., 1989, Biological Cybernetics, 61, 241
Romano, P., Zuccarello, F., Guglielmino, S. L., et al., 2015, A&A, 582, 55
Romano, P., Falco, M., Guglielmino, S. L., et al., 2017, ApJ, 837, 173
Romano, P., Elmhamdi, A., Falco, M., et al., 2018, ApJL, 852, 1
Romano, P., Elmhamdi, A., & Kordi, A., 2019, Sol. Phys., 294, 1
Romano, P., Murabito, M., Guglielmino, S. L., et al., 2020, ApJ, 899, 129
Rousseeuw, P. J., 1987, Journal of Computational and Applied Mathematics, 20, 53
Shelhamer, E., Long, J., & Darrell, T., 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4)
Schad, T., 2017, Sol. Phys., 292, 132
Solanki, S. K., 2003, A&ARv, 11, 153
Verbeeck, C., 2013, Sol. Phys., 283, 67
Welsch, B. T., et al., 2004, ApJ, 610, 1148
Zharkova, V., Ipson, S. S., Benkhalil, A., & Zharkov, S., 2005, Artificial Intelligence Review, 23(3), 209
Zeljko, I., Connolly, A. J., VanderPlas, J. T., & Gray, A., 2014, Statistics, Data Mining and Machine Learning in Astronomy, Princeton Series in Modern Observational Astronomy

This paper has been typeset from a TeX/LaTeX file prepared by the author.