Publication


Featured research published by K. Nienartowicz.


Monthly Notices of the Royal Astronomical Society | 2011

Random forest automated supervised classification of Hipparcos periodic variable stars

P. Dubath; L. Rimoldini; Maria Süveges; J. Blomme; M. López; L. M. Sarro; J. De Ridder; J. Cuypers; L. P. Guy; I. Lecoeur; K. Nienartowicz; A. Jan; M. Beck; Nami Mowlavi; P. De Cat; Thomas Lebzelter; Laurent Eyer

We present an evaluation of the performance of an automated classification of the Hipparcos periodic variable stars into 26 types. The sub-sample with the most reliable variability types available in the literature is used to train supervised algorithms to characterize the type dependencies on a number of attributes. The most useful attributes evaluated with the random forest methodology include, in decreasing order of importance, the period, the amplitude, the V − I colour index, the absolute magnitude, the residual around the folded light-curve model, the magnitude distribution skewness and the amplitude of the second harmonic of the Fourier series model relative to that of the fundamental frequency. Random forests and a multistage scheme involving Bayesian network and Gaussian mixture methods lead to statistically equivalent results. In standard 10-fold cross-validation (CV) experiments, the rate of correct classification is between 90 and 100 per cent, depending on the variability type. The main mis-classification cases, up to a rate of about 10 per cent, arise due to confusion between SPB and ACV blue variables and between eclipsing binaries, ellipsoidal variables and other variability types. Our training set and the predicted types for the other Hipparcos periodic stars are available online.
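The workflow this abstract describes, attribute-based supervised classification with random forests evaluated by 10-fold cross-validation, can be illustrated with a minimal scikit-learn sketch. The attribute names follow the abstract, but the data and the two-class setup below are synthetic stand-ins for the 26-type Hipparcos training set, not the paper's pipeline.

```python
# Hedged sketch: random forest classification of variable-star attributes
# with 10-fold cross-validation and feature-importance ranking.
# All data are synthetic; only the attribute names come from the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.lognormal(0.0, 1.0, n),   # period (days)
    rng.uniform(0.01, 2.0, n),    # amplitude (mag)
    rng.normal(0.5, 0.3, n),      # V - I colour index
    rng.normal(2.0, 2.0, n),      # absolute magnitude
])
# Two mock variability types separable mainly by period, mirroring the
# paper's finding that the period is the most important attribute.
y = (X[:, 0] > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
clf.fit(X, y)
importances = clf.feature_importances_        # attribute relevance ranking
print(scores.mean(), importances.argmax())
```

On this toy problem the classifier recovers the period threshold almost perfectly and ranks the period as the top attribute; on real, noisy training sets the per-type rates vary as the abstract reports.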


Astronomy and Astrophysics | 2016

Gaia Data Release 1 - The Cepheid and RR Lyrae star pipeline and its application to the south ecliptic pole region

G. Clementini; V. Ripepi; S. Leccia; Nami Mowlavi; I. Lecoeur-Taibi; M. Marconi; László Szabados; Laurent Eyer; L. P. Guy; L. Rimoldini; G. Jevardat de Fombelle; B. Holl; G. Busso; Jonathan Charnas; J. Cuypers; F. De Angeli; J. De Ridder; J. Debosscher; D. W. Evans; P. Klagyivik; I. Musella; K. Nienartowicz; D. Ordonez; S. Regibo; M. Riello; L. M. Sarro; Maria Süveges

Context. The European Space Agency spacecraft Gaia is expected to observe about 10 000 Galactic Cepheids and over 100 000 Milky Way RR Lyrae stars (a large fraction of which will be new discoveries) during the five-year nominal lifetime spent scanning the whole sky to a faint limit of G = 20.7 mag, sampling their light variation on average about 70 times. Aims. We present an overview of the Specific Objects Study (SOS) pipeline developed within the Coordination Unit 7 (CU7) of the Data Processing and Analysis Consortium (DPAC), the coordination unit charged with the processing and analysis of variable sources observed by Gaia, to validate and fully characterise Cepheids and RR Lyrae stars observed by the spacecraft. The algorithms developed to classify and extract information such as the pulsation period, mode of pulsation, mean magnitude, peak-to-peak amplitude of the light variation, subclassification in type, multiplicity, secondary periodicities, and light curve Fourier decomposition parameters, as well as physical parameters such as mass, metallicity, reddening, and age (for classical Cepheids), are briefly described. Methods. The full chain of the CU7 pipeline was run on the time series photometry collected by Gaia during 28 days of ecliptic pole scanning law (EPSL) and over a year of nominal scanning law (NSL), starting from the general Variability Detection and general Characterization, proceeding through the global Classification, and ending with the detailed checks and typecasting of the SOS for Cepheids and RR Lyrae stars (SOS Cep&RRL). We describe in more detail how the SOS Cep&RRL pipeline was specifically tailored to analyse Gaia's G-band photometric time series with a south ecliptic pole (SEP) footprint, which covers an external region of the Large Magellanic Cloud (LMC), and to produce results for confirmed RR Lyrae stars and Cepheids to be published in Gaia Data Release 1 (Gaia DR1). Results. G-band time series photometry and characterisation by the SOS Cep&RRL pipeline (mean magnitude and pulsation characteristics) are published in Gaia DR1 for a total sample of 3194 variable stars (599 Cepheids and 2595 RR Lyrae stars), of which 386 (43 Cepheids and 343 RR Lyrae stars) are new discoveries by Gaia. All 3194 stars are distributed over an area extending 38 degrees on either side from a point offset from the centre of the LMC by about 3 degrees to the north and 4 degrees to the east. The vast majority are located within the LMC. The published sample also includes a few bright RR Lyrae stars that trace the outer halo of the Milky Way in front of the LMC.
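The light curve Fourier decomposition parameters mentioned in the abstract (amplitude ratios and phase differences such as R21 and φ21, standard quantities in Cepheid and RR Lyrae characterisation) can be obtained from a linear least-squares harmonic fit to the folded light curve. A minimal numpy sketch on synthetic data, not the DPAC implementation:

```python
# Hedged sketch: fit m(phi) = A0 + sum_k [a_k cos(2*pi*k*phi) + b_k sin(2*pi*k*phi)]
# by linear least squares, then derive the R21 amplitude ratio and the
# phi21 phase difference. The light curve below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
phase = rng.uniform(0.0, 1.0, 200)
# Synthetic folded light curve: two harmonics plus Gaussian noise (mag).
mag = (15.0 + 0.30 * np.cos(2 * np.pi * phase + 0.4)
            + 0.12 * np.cos(4 * np.pi * phase + 1.1)
            + rng.normal(0.0, 0.01, phase.size))

nharm = 2
cols = [np.ones_like(phase)]
for k in range(1, nharm + 1):
    cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, mag, rcond=None)

# Convert (a_k, b_k) pairs to amplitude/phase form:
# a_k cos(x) + b_k sin(x) = A_k cos(x - p_k), with A_k = hypot(a_k, b_k).
amps = [np.hypot(coef[2 * k - 1], coef[2 * k]) for k in range(1, nharm + 1)]
phis = [np.arctan2(coef[2 * k], coef[2 * k - 1]) for k in range(1, nharm + 1)]
R21 = amps[1] / amps[0]                        # second-to-first amplitude ratio
phi21 = (phis[1] - 2 * phis[0]) % (2 * np.pi)  # phase difference
print(round(R21, 2))
```

With the injected amplitudes of 0.30 and 0.12 mag, the fit recovers R21 ≈ 0.4; the same quantities computed on real light curves feed the type classification discussed above.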


Monthly Notices of the Royal Astronomical Society | 2012

Automated classification of Hipparcos unsolved variables

L. Rimoldini; P. Dubath; Maria Süveges; M. López; L. M. Sarro; J. Blomme; J. De Ridder; J. Cuypers; L. P. Guy; Nami Mowlavi; I. Lecoeur-Taibi; M. Beck; A. Jan; K. Nienartowicz; D. Ordóñez-Blanco; Thomas Lebzelter; Laurent Eyer

We present an automated classification of stars exhibiting periodic, non-periodic and irregular light variations. The Hipparcos catalogue of unsolved variables is employed to complement the training set of periodic variables of Dubath et al. with irregular and non-periodic representatives, leading to 3881 sources in total which describe 24 variability types. The attributes employed to characterize light-curve features are selected according to their relevance for classification. Classifier models are produced with random forests and a multistage methodology based on Bayesian networks, achieving overall misclassification rates under 12 per cent. Both classifiers are applied to predict variability types for 6051 Hipparcos variables associated with uncertain or missing types in the literature.


Monthly Notices of the Royal Astronomical Society | 2012

Search for high-amplitude δ Scuti and RR Lyrae stars in Sloan Digital Sky Survey Stripe 82 using principal component analysis

Maria Süveges; Branimir Sesar; Maria Varadi; Nami Mowlavi; Andrew Cameron Becker; Ž. Ivezić; M. Beck; K. Nienartowicz; L. Rimoldini; P. Dubath; Paul Bartholdi; Laurent Eyer

We propose a robust principal component analysis framework for the exploitation of multiband photometric measurements in large surveys. Period search results are improved using the time-series of the first principal component due to its optimized signal-to-noise ratio. The presence of correlated excess variations in the multivariate time-series enables the detection of weaker variability. Furthermore, the direction of the largest variance differs for certain types of variable stars. This can be used as an efficient attribute for classification. The application of the method to a subsample of Sloan Digital Sky Survey Stripe 82 data yielded 132 high-amplitude δ Scuti variables. We also found 129 new RR Lyrae variables, complementary to the catalogue of Sesar et al., extending the halo area mapped by Stripe 82 RR Lyrae stars towards the Galactic bulge. The sample also comprises 25 multiperiodic or Blazhko RR Lyrae stars.
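The core idea, that variability shared across photometric bands concentrates in the first principal component and boosts its signal-to-noise ratio, can be illustrated in a few lines of numpy. This is plain PCA on simulated data, not the robust variant developed in the paper:

```python
# Hedged sketch: a variability signal common to several bands ends up in
# the first principal component, whose correlation with the true signal
# exceeds that of any single band. Synthetic multiband data.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 500)
signal = 0.2 * np.sin(2 * np.pi * t / 1.7)        # common variability
bands = np.column_stack([
    signal + rng.normal(0, 0.1, t.size),           # e.g. g band
    0.8 * signal + rng.normal(0, 0.1, t.size),     # e.g. r band
    0.6 * signal + rng.normal(0, 0.1, t.size),     # e.g. i band
])

X = bands - bands.mean(axis=0)
# PCA via SVD: the first right-singular vector is the direction of largest
# variance; projecting onto it yields the PC1 time series used for the
# period search.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]

def snr(x):
    # crude signal-to-noise proxy: absolute correlation with the true signal
    return abs(np.corrcoef(x, signal)[0, 1])

print(snr(pc1) >= max(snr(X[:, k]) for k in range(3)))
```

Because the noise in each band is independent while the signal is shared, PC1 averages the noise down, which is why running the period search on the PC1 time series improves the detection of weaker variability.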


Astronomy and Astrophysics | 2018

Gaia Data Release 2: Summary of the variability processing and analysis results

B. Holl; Marc Audard; K. Nienartowicz; G. Jevardat de Fombelle; O. Marchal; Nami Mowlavi; G. Clementini; J. De Ridder; D. W. Evans; L. P. Guy; A. C. Lanzafame; Thomas Lebzelter; L. Rimoldini; M. Roelens; Shay Zucker; Elisa Distefano; A. Garofalo; I. Lecoeur-Taibi; M. Lopez; R. Molinaro; T. Muraveva; A. Panahi; S. Regibo; V. Ripepi; L. M. Sarro; C. Aerts; Richard I. Anderson; J. Charnas; F. Barblan; S. Blanco-Cuaresma

Context. The Gaia Data Release 2 (DR2) contains more than half a million sources that are identified as variable stars. Aims: We summarise the processing and results of the identification of variable source candidates of RR Lyrae stars, Cepheids, long-period variables (LPVs), rotation modulation (BY Dra-type) stars, δ Scuti and SX Phoenicis stars, and short-timescale variables. In this release we aim to provide useful but not necessarily complete samples of candidates. Methods: The processed Gaia data consist of the G, GBP, and GRP photometry during the first 22 months of operations as well as positions and parallaxes. Various methods from classical statistics, data mining, and time-series analysis were applied and tailored to the specific properties of Gaia data, as were various visualisation tools to interpret the data. Results: The DR2 variability release contains 228 904 RR Lyrae stars, 11 438 Cepheids, 151 761 LPVs, 147 535 stars with rotation modulation, 8882 δ Scuti and SX Phoenicis stars, and 3018 short-timescale variables. These results are distributed over a classification and various Specific Object Studies tables in the Gaia archive, along with the three-band time series and associated statistics for the underlying 550 737 unique sources. We estimate that about half of them are newly identified variables. The variability type completeness varies strongly as a function of sky position as a result of the non-uniform sky coverage and intermediate calibration level of these data. The probabilistic and automated nature of this work implies certain completeness and contamination rates that are quantified so that users can anticipate their effects. This means that even well-known variable sources can be missed or misidentified in the published data. Conclusions: The DR2 variability release only represents a small subset of the processed data. Future releases will include more variable sources and data products; however, DR2 already shows the very high quality of the data and great promise for variability studies.


Monthly Notices of the Royal Astronomical Society | 2015

A comparative study of four significance measures for periodicity detection in astronomical surveys

Maria Süveges; L. P. Guy; Laurent Eyer; Jan Cuypers; B. Holl; I. Lecoeur-Taibi; Nami Mowlavi; K. Nienartowicz; Diego Ordóñez Blanco; L. Rimoldini; Idoia Ruiz

We study the problem of periodicity detection in massive data sets of photometric or radial velocity time series, as presented by ESA's Gaia mission. Periodicity detection hinges on the estimation of the false alarm probability (FAP) of the extremum of the periodogram of the time series. We consider the problem of its estimation with two main issues in mind. First, for a given number of observations and signal-to-noise ratio, the rate of correct periodicity detections should be constant for all realized cadences of observations regardless of the observational time patterns, in order to avoid sky biases that are difficult to assess. Second, the computational loads should be kept feasible even for millions of time series. Using the Gaia case, we compare the FM method (Paltani 2004; Schwarzenberg-Czerny 2012), the Baluev method (Baluev 2008) and …
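The analytic FAP measures compared in the paper approximate what a permutation test measures by brute force: the probability that pure noise, observed on the same cadence, produces a periodogram peak at least as high as the observed one. A hedged sketch using a plain power-spectrum periodogram (not any of the four significance measures under study); the shuffle keeps the time stamps fixed, so the cadence and its aliases are preserved, which is exactly the cadence-invariance concern raised in the abstract:

```python
# Hedged sketch: brute-force FAP of the periodogram maximum via
# permutation of the magnitudes over a fixed irregular cadence.
# Synthetic data; the paper's analytic methods avoid this cost.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 30.0, 120))          # irregular observation times
x = np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.5, t.size)

freqs = np.linspace(0.05, 2.0, 400)

def peak_power(times, values):
    v = values - values.mean()
    # classical periodogram |sum v * exp(-2*pi*i*f*t)|^2 / n per trial f
    ph = np.exp(-2j * np.pi * freqs[:, None] * times[None, :])
    return (np.abs(ph @ v) ** 2 / times.size).max()

observed = peak_power(t, x)
# Null distribution: shuffle magnitudes over the SAME time stamps.
null = np.array([peak_power(t, rng.permutation(x)) for _ in range(200)])
fap = (null >= observed).mean()
print(fap)
```

For millions of Gaia time series this resampling loop is far too expensive, which is the motivation for the fast analytic FAP estimates the paper benchmarks.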


Astronomy and Astrophysics | 2017

Gaia eclipsing binary and multiple systems: Two-Gaussian models applied to OGLE-III eclipsing binary light curves in the Large Magellanic Cloud

Nami Mowlavi; I. Lecoeur-Taibi; B. Holl; L. Rimoldini; F. Barblan; Andrej Prsa; A. Kochoska; Maria Süveges; Laurent Eyer; K. Nienartowicz; G. Jevardat; Jonathan Charnas; L. P. Guy; Marc Audard

The advent of large-scale multi-epoch surveys raises the need for automated light curve (LC) processing. This is particularly true for eclipsing binaries (EBs), which form one of the most populated types of variable objects. The Gaia mission, launched at the end of 2013, is expected to detect of the order of a few million EBs over its 5-year mission. We present an automated procedure to characterize EBs based on the geometric morphology of their LCs, with two aims: first, to study an ensemble of EBs on a statistical ground without the need to model the binary system, and second, to enable the automated identification of EBs that display atypical LCs. We model the folded LC geometry of EBs using up to two Gaussian functions for the eclipses and a cosine function for any ellipsoidal-like variability that may be present between the eclipses. The procedure is applied to the OGLE-III data set of EBs in the Large Magellanic Cloud (LMC) as a proof of concept. The Bayesian information criterion is used to select the best model among models containing various combinations of those components, as well as to estimate the significance of the components. Based on the two-Gaussian models, EBs with atypical LC geometries are successfully identified in two diagrams, using the Abbe values of the original and residual folded LCs, and the reduced χ² values. Cleaning the data set from the atypical cases and further filtering out LCs that contain non-significant eclipse candidates, the ensemble of EBs can be studied on a statistical ground using the two-Gaussian model parameters. For illustration purposes, we present the distribution of projected eccentricities as a function of orbital period for the OGLE-III set of EBs in the LMC, as well as the distribution of their primary versus secondary eclipse widths.
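The model-selection step described in the abstract, Gaussian eclipse components compared through the Bayesian information criterion, can be sketched as follows. For simplicity the eclipse positions and widths are held fixed and only the depths are fitted linearly, unlike the full non-linear fit of the paper; all data are synthetic:

```python
# Hedged sketch: candidate folded-light-curve models built from Gaussian
# eclipse components (plus a constant), compared via BIC.
# BIC = n*ln(RSS/n) + k*ln(n) for a least-squares fit with k parameters.
import numpy as np

rng = np.random.default_rng(4)
phi = np.linspace(0.0, 1.0, 400, endpoint=False)

def eclipse(phase, mu, sigma):
    # Gaussian eclipse profile on the folded phase axis
    return np.exp(-0.5 * ((phase - mu) / sigma) ** 2)

# Synthetic EB light curve: primary and secondary eclipses plus noise (mag).
mag = (14.0 + 0.5 * eclipse(phi, 0.25, 0.03)
            + 0.2 * eclipse(phi, 0.75, 0.03)
            + rng.normal(0.0, 0.02, phi.size))

def bic(design):
    coef, *_ = np.linalg.lstsq(design, mag, rcond=None)
    rss = ((mag - design @ coef) ** 2).sum()
    n, k = mag.size, design.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

const = np.ones_like(phi)
g1 = eclipse(phi, 0.25, 0.03)   # primary eclipse component
g2 = eclipse(phi, 0.75, 0.03)   # secondary eclipse component
models = {
    "constant":      np.column_stack([const]),
    "one Gaussian":  np.column_stack([const, g1]),
    "two Gaussians": np.column_stack([const, g1, g2]),
}
best = min(models, key=lambda name: bic(models[name]))
print(best)
```

The BIC penalty of k·ln(n) per parameter is what lets the pipeline reject eclipse candidates that do not significantly improve the fit, as the abstract describes.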


arXiv: Instrumentation and Methods for Astrophysics | 2014

Time series data mining for the Gaia variability analysis.

K. Nienartowicz; Diego Ordóñez Blanco; L. P. Guy; B. Holl; I. Lecoeur-Taibi; Nami Mowlavi; L. Rimoldini; Idoia Ruiz; Maria Süveges; Laurent Eyer

Gaia is an ESA cornerstone mission, which was successfully launched in December 2013 and commenced operations in July 2014. Within the Gaia Data Processing and Analysis Consortium, Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources and nearly 4 billion associated time series (photometric, spectrophotometric, and spectroscopic), encoding information in over 800 billion observations during the 5 years of the mission, resulting in a petabyte-scale analytical problem. In this article, we briefly describe the solutions we developed to address the challenges of time series variability analysis: from the structure for a distributed data-oriented scientific collaboration to architectural choices and specific components used. Our approach is based on Open Source components with a distributed, partitioned database as the core to handle incrementally: ingestion, distributed processing, analysis, results and export in a constrained time window.


arXiv: Solar and Stellar Astrophysics | 2012

Hipparcos Variable Star Detection and Classification Efficiency

P. Dubath; I. Lecoeur-Taibi; L. Rimoldini; Maria Süveges; J. Blomme; M. López; L. M. Sarro; J. De Ridder; J. Cuypers; L. P. Guy; K. Nienartowicz; A. Jan; M. Beck; Nami Mowlavi; P. De Cat; T. Lebzelter; Laurent Eyer



Archive | 2012

Data Management at Gaia Data Processing Centers

Pilar de Teodoro; Alexander Hutton; Benoit Frezouls; Alain Montmory; Jordi Portell; Rosario Messineo; M. Riello; K. Nienartowicz


Collaboration


Dive into K. Nienartowicz's collaborations.

Top Co-Authors

B. Holl

University of Geneva


D. W. Evans

University of Cambridge
