Norman Mueller
Geoscience Australia
Publications
Featured research published by Norman Mueller.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2010
Fuqin Li; David L. B. Jupp; Shanti Reddy; Leo Lymburner; Norman Mueller; Peter Tan; Anisul Islam
Normalizing for atmospheric and land surface bidirectional reflectance distribution function (BRDF) effects is essential in satellite data processing. It is important both for a single scene, where the combination of land covers, sun, and view angles creates anisotropy, and for multiple scenes in which the sun angle changes. As a consequence, it is important for inter-sensor calibration and comparison. Procedures based on physics-based models have been applied successfully to Moderate Resolution Imaging Spectroradiometer (MODIS) data. For Landsat and other higher-resolution data, similar options exist. However, estimating BRDF models by internal fitting is not feasible because of the smaller variation of view and solar angles and the infrequent revisits. In this paper, we explore the potential for developing operational procedures to correct Landsat data using coupled physics-based atmospheric and BRDF models. The process was realized using BRDF shape functions derived from MODIS together with the MODTRAN 4 radiative transfer model. The atmospheric and BRDF correction algorithm was tested for reflectance factor estimation using Landsat data for two sites with different land covers in Australia. The Landsat reflectance values showed good agreement with ground-based spectroradiometer measurements. In addition, overlapping images from adjacent paths in Queensland, Australia, were also used to validate the BRDF correction. The results clearly show that the algorithm can remove most of the BRDF effect without empirical adjustment. The comparison between normalized Landsat and MODIS reflectance factors also shows a good relationship, indicating that cross-calibration between the two sensors is achievable.
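To make the BRDF-normalisation step concrete, the sketch below scales an observed reflectance to a standard sun/view geometry using the RossThick-LiSparseR kernel model, with the kernel weights assumed to come from a MODIS BRDF product such as MCD43. It is an illustrative outline, not the coupled MODTRAN workflow the paper describes; the reference sun angle and example values are assumptions.

```python
# Illustrative sketch (not the authors' production code): BRDF normalisation of a
# surface reflectance value using the RossThick-LiSparseR kernel model, with
# kernel weights (f_iso, f_vol, f_geo) assumed to come from a MODIS BRDF product.
# Angle conventions follow the MODIS BRDF/Albedo ATBD; all angles are in radians.
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def li_sparse_r(theta_s, theta_v, phi, hb=2.0, br=1.0):
    """LiSparse-Reciprocal geometric-optical kernel (spheroid h/b=2, b/r=1)."""
    tp_s = np.arctan(br * np.tan(theta_s))      # equivalent spheroid angles
    tp_v = np.arctan(br * np.tan(theta_v))
    cos_xi = (np.cos(tp_s) * np.cos(tp_v)
              + np.sin(tp_s) * np.sin(tp_v) * np.cos(phi))
    sec_s, sec_v = 1.0 / np.cos(tp_s), 1.0 / np.cos(tp_v)
    d2 = (np.tan(tp_s) ** 2 + np.tan(tp_v) ** 2
          - 2 * np.tan(tp_s) * np.tan(tp_v) * np.cos(phi))
    cos_t = hb * np.sqrt(d2 + (np.tan(tp_s) * np.tan(tp_v) * np.sin(phi)) ** 2) / (sec_s + sec_v)
    t = np.arccos(np.clip(cos_t, -1.0, 1.0))
    overlap = (t - np.sin(t) * np.cos(t)) * (sec_s + sec_v) / np.pi
    return overlap - sec_s - sec_v + 0.5 * (1 + cos_xi) * sec_s * sec_v

def brdf_normalise(rho, f_iso, f_vol, f_geo, theta_s, theta_v, phi,
                   theta_s_ref=np.deg2rad(45.0)):
    """Scale an observed reflectance to nadir view and a reference sun angle."""
    observed = (f_iso + f_vol * ross_thick(theta_s, theta_v, phi)
                + f_geo * li_sparse_r(theta_s, theta_v, phi))
    reference = (f_iso + f_vol * ross_thick(theta_s_ref, 0.0, 0.0)
                 + f_geo * li_sparse_r(theta_s_ref, 0.0, 0.0))
    return rho * reference / observed

# Example: a red-band pixel observed off-nadir, normalised to a 45 deg sun, nadir view.
print(brdf_normalise(rho=0.12, f_iso=0.09, f_vol=0.04, f_geo=0.01,
                     theta_s=np.deg2rad(35), theta_v=np.deg2rad(7), phi=np.deg2rad(60)))
```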
International Journal of Digital Earth | 2016
Adam Lewis; Leo Lymburner; Matthew B. J. Purss; Brendan P. Brooke; Benjamin J. K. Evans; Alex Ip; Arnold G. Dekker; James R. Irons; Stuart Minchin; Norman Mueller; Simon Oliver; Dale Roberts; Barbara Ryan; Medhavy Thankappan; Robert Woodcock; Lesley Wyborn
ABSTRACT The effort and cost required to convert satellite Earth Observation (EO) data into meaningful geophysical variables has prevented the systematic analysis of all available observations. To overcome these problems, we utilise an integrated High Performance Computing and Data environment to rapidly process, restructure and analyse the Australian Landsat data archive. In this approach, the EO data are assigned to a common grid framework that spans the full geospatial and temporal extent of the observations – the EO Data Cube. This approach is pixel-based and incorporates geometric and spectral calibration and quality assurance of each Earth surface reflectance measurement. We demonstrate the utility of the approach with rapid time-series mapping of surface water across the entire Australian continent using 27 years of continuous, 25 m resolution observations. Our preliminary analysis of the Landsat archive shows how the EO Data Cube can effectively liberate high-resolution EO data from their complex sensor-specific data structures and revolutionise our ability to measure environmental change.
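The pixel-based time-series idea can be illustrated with a minimal, self-contained sketch: given a gridded stack of per-date observations of the kind a data-cube query returns, compute how often each pixel was observed as wet. The NDWI threshold, array shapes, and variable names below are illustrative assumptions, not the production surface-water algorithm.

```python
# Minimal sketch of per-pixel time-series analysis on a gridded stack of
# observations; the data here are synthetic and the simple NDWI test stands in
# for the actual water-mapping method used over the Landsat archive.
import numpy as np
import xarray as xr

time = np.array(["2000-01-01", "2000-02-01", "2000-03-01"], dtype="datetime64[D]")
dims, coords = ("time", "y", "x"), {"time": time}
green = xr.DataArray(np.random.rand(3, 200, 200), dims=dims, coords=coords)
nir = xr.DataArray(np.random.rand(3, 200, 200), dims=dims, coords=coords)
clear = xr.DataArray(np.random.rand(3, 200, 200) > 0.3, dims=dims, coords=coords)

# Simple NDWI water test applied to every observation (illustrative only).
ndwi = (green - nir) / (green + nir)
wet = (ndwi > 0.0) & clear

# Per-pixel summary over the full time series: fraction of clear observations that were wet.
water_frequency = wet.sum("time") / clear.sum("time").clip(min=1)
print(water_frequency.shape)  # (200, 200)
```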
international geoscience and remote sensing symposium | 2013
Peter Tan; Leo Lymburner; Norman Mueller; Fuqin Li; Medhavy Thankappan; Adam Lewis
The National Dynamic Land Cover Dataset (DLCD) classifies Australian land cover into 34 categories, which conform to the 2007 International Organization for Standardization (ISO) land cover standard (ISO 19144-2). The DLCD was developed by Geoscience Australia and the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) to provide nationally consistent land cover information to federal and state governments and the general public. This paper describes the modelling procedure used to generate the DLCD, including the machine learning methodologies and time series analysis techniques involved in the process.
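As a hedged illustration of the general pattern described (per-pixel time-series statistics fed to a machine-learning classifier), the sketch below derives simple features from a vegetation-index time series and trains a random-forest classifier. The feature set, the classifier choice, and the synthetic data are assumptions, not the DLCD production workflow.

```python
# Illustrative only: time-series features per pixel plus a supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def time_series_features(evi_stack):
    """Collapse a (time, pixel) EVI stack into simple per-pixel statistics."""
    return np.column_stack([
        evi_stack.mean(axis=0),   # average greenness
        evi_stack.min(axis=0),    # dry-season baseline
        evi_stack.max(axis=0),    # peak greenness
        evi_stack.std(axis=0),    # proxy for seasonal amplitude
    ])

# Synthetic example: 120 monthly EVI values for 1000 training pixels with known labels.
rng = np.random.default_rng(0)
evi = rng.random((120, 1000))
labels = rng.integers(0, 5, 1000)      # e.g. five of the 34 DLCD classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(time_series_features(evi), labels)

# Predict land cover for new pixels from their EVI time series.
print(clf.predict(time_series_features(rng.random((120, 10)))))
```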
IEEE Transactions on Geoscience and Remote Sensing | 2017
Dale Roberts; Norman Mueller; Alexis McIntyre
High-quality and large-scale image composites are increasingly important for a variety of applications. Yet a number of challenges still exist in the generation of composites with certain desirable qualities such as maintaining the spectral relationship between bands, reduced spatial noise, and consistency across scene boundaries so that large mosaics can be generated. We present a new method for generating pixel-based composite mosaics that achieves these goals. The method, based on a high-dimensional statistic called the ‘geometric median,’ effectively trades a temporal stack of poor quality observations for a single high-quality pixel composite with reduced spatial noise. The method requires no parameters or expert-defined rules. We quantitatively assess its strengths by benchmarking it against two other pixel-based compositing approaches over Tasmania, which is one of the most challenging locations in Australia for obtaining cloud-free imagery.
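The core computation can be sketched with Weiszfeld's iterative algorithm for the geometric median of one pixel's multi-band observations over time. This is an illustrative outline rather than the authors' optimised implementation, and the synthetic stack below is assumed data.

```python
# Sketch of a per-pixel geometric median via Weiszfeld's algorithm. Unlike a
# per-band median, it returns a point minimising the sum of Euclidean distances
# to the observations, so the spectral relationship between bands is preserved.
import numpy as np

def geometric_median(obs, eps=1e-7, max_iter=100):
    """obs: (n_times, n_bands) array of valid observations for one pixel."""
    median = obs.mean(axis=0)                       # start from the centroid
    for _ in range(max_iter):
        dist = np.linalg.norm(obs - median, axis=1)
        nonzero = dist > eps                        # skip exact coincidences
        if not nonzero.any():
            break
        w = 1.0 / dist[nonzero]
        new = (obs[nonzero] * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(new - median) < eps:
            return new
        median = new
    return median

# Example: 30 cloud-screened observations of a 6-band pixel with one residual outlier.
rng = np.random.default_rng(1)
stack = 0.2 + 0.05 * rng.standard_normal((30, 6))
stack[5] = 0.9                                      # a residual cloudy observation
print(geometric_median(stack))                      # stays close to 0.2 in every band
```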
Big Earth Data | 2017
Trevor Dhu; Bex Dunn; Ben Lewis; Leo Lymburner; Norman Mueller; Erin Telfer; Adam Lewis; Alexis McIntyre; Stuart Minchin; Claire Phillips
Abstract Petascale archives of Earth observations from space (EOS) have the potential to characterise water resources at continental scales. For these data to be useful, they need to be organised, converted from the individual scenes acquired by multiple sensors into “analysis ready data”, and made available through high-performance computing platforms. Moreover, converting these data into insights requires the integration of non-EOS data-sets that provide biophysical and climatic context for EOS. Digital Earth Australia has demonstrated its ability to link EOS to rainfall and stream gauge data to provide insight into surface water dynamics during the hydrological extremes of flood and drought. This information is supporting the characterisation of groundwater resources across Australia’s north and could potentially be used to understand the vulnerability of transport infrastructure to floods in remote, sparsely gauged regions of northern and central Australia.
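As a small, hypothetical illustration of the EO and non-EO integration described, the sketch below aligns a satellite-derived inundated-area time series with a daily stream-gauge record so the two can be compared during a flood event. All column names and values are assumptions made for the example.

```python
# Hypothetical join of EO-derived surface water extent with a gauge record.
import pandas as pd

eo_water = pd.DataFrame({
    "date": pd.to_datetime(["2011-01-03", "2011-01-19", "2011-02-04"]),
    "wet_area_km2": [12.4, 86.1, 40.3],             # from per-scene water mapping
}).set_index("date")

gauge = pd.DataFrame({
    "date": pd.date_range("2011-01-01", "2011-02-10", freq="D"),
}).set_index("date")
gauge["discharge_m3s"] = 50.0                        # placeholder gauge record

# Attach the gauge reading from the same day to each satellite observation.
combined = eo_water.join(gauge, how="left")
print(combined)
```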
Proceedings of the 2014 conference on Big Data from Space | 2014
Adam Lewis; Simon Oliver; Alex Ip; Steven Ring; Dale Roberts; Norman Mueller; Medhavy Thankappan; Matthew B. J. Purss
Remote Sensing of Environment | 2016
Norman Mueller; Adam Lewis; Dale Roberts; S. Ring; R. Melrose; Joshua Sixsmith; Leo Lymburner; Alexis McIntyre; Peter Tan; S. Curnow; Alex Ip
Remote Sensing of Environment | 2012
Fuqin Li; David L. B. Jupp; Medhavy Thankappan; Leo Lymburner; Norman Mueller; Adam Lewis; Alex Held
Archive | 2011
Juan Pablo Guerschman; Garth Warren; Guy Byrne; Leo Lymburner; Norman Mueller; Albert Van Dijk
Remote Sensing of Environment | 2017
Adam Lewis; Simon Oliver; Leo Lymburner; Ben Evans; Lesley Wyborn; Norman Mueller; Gregory Raevksi; Jeremy Hooke; Rob Woodcock; Joshua Sixsmith; Wenjun Wu; Peter Tan; Fuqin Li; Brian D. Killough; Stuart Minchin; Dale Roberts; Damien Ayers; Biswajit Bala; John L. Dwyer; Arnold G. Dekker; Trevor Dhu; Andrew Hicks; Alex Ip; Matt Purss; Clare Richards; Stephen Sagar; Claire Trenham; Peter Wang; Lan-Wei Wang