Daniel Selva
Cornell University
Publications
Featured research published by Daniel Selva.
IEEE Transactions on Geoscience and Remote Sensing | 2005
Mercè Vall-Llossera; Adriano Camps; Ignasi Corbella; Francesc Torres; Nuria Duffo; Alessandra Monerris; Roberto Sabia; Daniel Selva; Carmen Antolin; Ernesto Lopez-Baeza; Joan Ferran Ferrer; Kauzar Saleh
The goal of the Soil Moisture and Ocean Salinity mission over land is to infer surface soil moisture from multiangular L-band radiometric measurements. As the canopy affects the microwave emission of land, it is necessary to characterize different vegetation layers. This paper presents the Reference Pixel L-Band Experiment (REFLEX), carried out in June-July 2003 at the València Anchor Station, Spain, to study the effects of grapevines on the soil emission and on the soil moisture retrieval. A wide range of soil moisture (SM), from saturated to completely dry soil, was measured with the Universitat Politècnica de Catalunya's L-band Automatic Radiometer (LAURA). Concurrently with the radiometric measurements, the gravimetric soil moisture, temperature, and roughness were measured, and the vines were fully characterized. The opacity and albedo of the vineyard have been estimated and found to be independent of polarization. The τ–ω model has been used to retrieve the SM and the vegetation parameters, obtaining good accuracy for incidence angles up to 55°. Algorithms with a three-parameter optimization (SM, albedo, and opacity) exhibit better performance than those with a one-parameter optimization (SM).
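For reference, the zero-order τ–ω emission model used in such retrievals can be written as below; this is a standard textbook form, and the symbols are not necessarily the exact ones used in the paper.

```latex
% Zero-order tau-omega emission model (standard form; symbols may differ
% from those used in the paper). Brightness temperature at polarization p
% and incidence angle theta:
\begin{align}
  \gamma  &= \exp\!\left(-\tau/\cos\theta\right)
          && \text{vegetation transmissivity} \\
  T_{B,p} &= e_p\,\gamma\,T_s
           + (1-\omega)\,(1-\gamma)\,(1+r_p\,\gamma)\,T_c
\end{align}
% e_p = 1 - r_p is the rough-soil emissivity, T_s and T_c the soil and
% canopy temperatures, tau the vegetation opacity, omega the single-
% scattering albedo; soil moisture enters through r_p via a dielectric model.
```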
IEEE Aerospace Conference | 2013
Daniel Selva; Edward F. Crawley
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best suited to evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of quantitative and semi-qualitative data, and of objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern-matching process where capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
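To make the pattern-matching view concrete, the sketch below scores a candidate architecture by matching its capabilities against weighted stakeholder requirements. The requirement names, weights, and capability fields are hypothetical illustrations, not the actual VASSAR rule base, which relies on a dedicated rule engine rather than plain Python predicates.

```python
# Minimal sketch of requirement-capability matching for architecture scoring.
# Requirement names, weights, and capability fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str
    weight: float        # relative importance to the stakeholder
    predicate: callable  # rule: capabilities -> satisfied?

@dataclass
class Architecture:
    name: str
    capabilities: dict = field(default_factory=dict)

def score(arch: Architecture, requirements: list[Requirement]) -> float:
    """Fraction of weighted requirements whose rule fires for this architecture."""
    total = sum(r.weight for r in requirements)
    met = sum(r.weight for r in requirements if r.predicate(arch.capabilities))
    return met / total if total else 0.0

# Hypothetical example: a soil-moisture requirement asking for L-band
# radiometry with revisit below 3 days, plus an affordability requirement.
reqs = [
    Requirement("soil_moisture", 1.0,
                lambda c: "L-band radiometer" in c.get("instruments", [])
                and c.get("revisit_days", 99) <= 3),
    Requirement("affordability", 0.5,
                lambda c: c.get("cost_musd", 1e9) <= 500),
]
arch = Architecture("A1", {"instruments": ["L-band radiometer"],
                           "revisit_days": 2, "cost_musd": 450})
print(score(arch, reqs))  # -> 1.0 (all weighted requirements satisfied)
```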
IEEE Aerospace Conference | 2010
Daniel Selva; Edward F. Crawley
When designing Earth observation missions, it is essential to take into account the programmatic context. Considering individual missions as part of a whole enables overall program optimization, which may bring important cost reductions and scientific and societal benefits.
Journal of Spacecraft and Rockets | 2014
Daniel Selva; Bruce G. Cameron; Edward F. Crawley
This paper presents a methodology to explore the architectural trade space of Earth observing satellite systems, and applies it to the Earth Science Decadal Survey. The architecting problem is formulated as a combinatorial optimization problem with three sets of architectural decisions: instrument selection, assignment of instruments to satellites, and mission scheduling. A computational tool was created to automatically synthesize architectures based on valid combinations of options for these three decisions and evaluate them according to several figures of merit, including satisfaction of program requirements, data continuity, affordability, and proxies for fairness, technical, and programmatic risk. A population-based heuristic search algorithm is used to search the trade space. The novelty of the tool is that it uses a rule-based expert system to model the knowledge-intensive components of the problem, such as scientific requirements, and to capture the nonlinear positive and negative interactions bet...
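As an illustration of the decision structure described above, the toy sketch below encodes a candidate architecture as a combination of instrument selection, instrument-to-satellite assignment, and launch scheduling, and exhaustively enumerates a very small space. The instrument names and bounds are placeholders; the actual tool evaluates architectures with a rule-based expert system and searches the space with a population-based heuristic rather than full enumeration.

```python
# Toy encoding of the three architectural decisions described above.
# Instrument names and bounds are placeholders, not Decadal Survey data.
from itertools import combinations, product

INSTRUMENTS = ["radar", "radiometer", "lidar", "spectrometer"]

def enumerate_architectures(max_sats=2, max_year=3):
    """Yield (selection, assignment, schedule) tuples for a small toy space."""
    for k in range(1, len(INSTRUMENTS) + 1):
        for selection in combinations(INSTRUMENTS, k):
            # assignment: which satellite each selected instrument flies on
            for assignment in product(range(max_sats), repeat=len(selection)):
                n_sats = len(set(assignment))
                # schedule: launch year of each satellite actually used
                for schedule in product(range(1, max_year + 1), repeat=n_sats):
                    yield selection, assignment, schedule

count = sum(1 for _ in enumerate_architectures())
print(count)  # -> 540 even for this tiny example; the real problem is vastly larger
```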
IEEE Aerospace Conference | 2014
Marc Sanchez; Daniel Selva; Bruce G. Cameron; Edward F. Crawley; Antonios Seas; Bernie Seery
NASA is currently conducting an architecture study for the next-generation Space Communication and Navigation system. This is an extremely complex problem with a variety of options in terms of band selection (RF, from S-band to Ka-band and beyond, or optical), network type (bent-pipe, circuit-switched, or packet-switched), fractionation strategies (monolithic, mother-daughters, homogeneous fractionation), orbit and constellation design (GEO/MEO/LEO, number of planes, number of satellites per plane), and so forth. When all the combinations are considered, the size of the tradespace grows to several million architectures. The ability of these architectures to meet the requirements of different user communities and other stakeholders (e.g., regulators, international partners) needs to be assessed. In this context, a computational tool was developed to enable the exploration of such a large space of architectures in terms of both performance and cost. A preliminary version of this tool was presented in a paper last year. This paper describes an updated version of the tool featuring a higher-fidelity, rule-based scheduling algorithm, as well as several modifications in the architecture enumeration and cost models. It also discusses the validation results for the tool using real TDRSS data, as well as the results and sensitivity analyses for several forward-looking scenarios. Particular emphasis is placed on families of architectures that are of interest to NASA, namely TDRSS-like architectures, architectures based on hosted payloads, and highly distributed architectures.
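The combinatorial growth mentioned above follows directly from the product of option counts across independent decisions, as in the back-of-the-envelope sketch below. The decision names and option counts are illustrative, not the actual values used in the SCaN study.

```python
# Back-of-the-envelope tradespace sizing: illustrative option counts only.
from math import prod

decisions = {
    "band":            4,   # e.g., S, Ku, Ka, optical
    "network_type":    3,   # bent-pipe, circuit-switched, packet-switched
    "fractionation":   3,   # monolithic, mother-daughter, homogeneous
    "orbit_regime":    3,   # GEO, MEO, LEO
    "num_planes":      4,
    "sats_per_plane":  6,
    "ground_segment":  5,
    "contract_model":  3,
}

size = prod(decisions.values())
print(f"{size:,} architectures from these {len(decisions)} decisions alone")
# A few more decisions, or more options per decision, quickly push this
# count into the millions before infeasible combinations are pruned.
```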
IEEE Aerospace Conference | 2013
Marc Sanchez; Daniel Selva; Bruce G. Cameron; Edward F. Crawley; Antonios Seas; Bernie Seery
NASA's Space Communication and Navigation (SCaN) Program is responsible for providing communication and navigation services to space missions and other users in and beyond low Earth orbit. The current SCaN architecture consists of three independent networks: the Space Network (SN), which contains the TDRS relay satellites in GEO; the Near Earth Network (NEN), which consists of several NASA-owned and commercially operated ground stations; and the Deep Space Network (DSN), with three ground stations in Goldstone, Madrid, and Canberra. The first task of this study is the stakeholder analysis. The goal of the stakeholder analysis is to identify the main stakeholders of the SCaN system and their needs. Twenty-one main groups of stakeholders have been identified and put on a stakeholder map. Their needs are currently being elicited by means of interviews and an extensive literature review. The data will then be analyzed by applying Cameron and Crawley's stakeholder analysis theory, with a view to highlighting dominant and conflicting needs. The second task of this study is the architectural tradespace exploration of the next-generation TDRSS. The space of possible architectures for SCaN is represented by a set of architectural decisions, each of which has a discrete set of options. A computational tool is used to automatically synthesize a very large number of possible architectures by enumerating different combinations of decisions and options. The same tool contains models to evaluate the architectures in terms of performance and cost. The performance model uses the stakeholder needs and requirements identified in the previous steps as inputs, and it is based on the VASSAR methodology presented in a companion paper. This paper summarizes the current status of the MIT SCaN architecture study. It starts by motivating the need to perform tradespace exploration studies in the context of relay data systems through a description of the history of NASA's space communication networks. It then presents the generalities of possible architectures for future space communication and navigation networks. Finally, it describes the tools and methods being developed, clearly indicating the architectural decisions that have been taken into account as well as the systematic approach followed to model them. The purpose of this study is to explore the SCaN architectural tradespace by means of a computational tool. This paper describes the tool, while the tradespace exploration is underway.
IEEE Aerospace Conference | 2013
L. Dyrud; Jonathan Fentzke; Gary S. Bust; Bob Erlandson; Sally Whitely; Brian Bauer; Steve Arnold; Daniel Selva; Kerri Cahoy; R. L. Bishop; Warren J. Wiscombe; Steven Lorentz; Stefan Slagowski; Brian Christopher Gunter; Kevin E. Trenberth
GEOScan is a proposed space-based facility of globally networked instruments that will provide revolutionary, massively dense global geosciences observations. Major scientific research projects are typically conducted using two approaches: community facilities and investigator-led focused missions. While science from space is almost exclusively conducted within the mission model, GEOScan is a new concept designed as a constellation facility from space utilizing a suite of space-based sensors that optimizes the scientific value across the greatest number of scientific disciplines in the Earth and geosciences, while constraining cost and accommodation-related parameters. Our grassroots design process targets questions that have not been, and will not be, answered until simultaneous global measurements are made. The relatively small size, mass, and power of the GEOScan instruments make them an ideal candidate for a hosted payload aboard a global constellation of communication satellites, such as the Iridium NEXT 66-satellite constellation. This paper will focus on the design and planning components of this new type of heterogeneous, multi-node facility concept, such as costing, design for manufacture, science synergy, and operations of this non-traditional mission concept. We will demonstrate that this mission design concept has distinct advantages over traditional monolithic satellite missions for a number of scientific measurement priorities and data products due to the constellation configuration, scaled manufacturing, and facility model.
IEEE Aerospace Conference | 2014
Daniel Selva
Large aerospace organizations typically spend significant resources in optimizing system design by means of specialized software (e.g., computer-aided design, simulation). Conversely, architectural decisions are made much faster, with far fewer resources, and typically in a much less structured way. This is despite clear indications that architectural decisions fix many design decisions and have a larger impact on lifecycle cost and performance than design decisions. The reason for this is that the architecting process is a much harder problem than the design process, due to large and varied sources of uncertainty, and humans usually perform much better than machines on this kind of ill-posed problem. In fact, many attempts to automate the architecture trade-space exploration process have failed due to these difficulties. Thus, current architecting practices are at the two extremes of the human-machine task allocation continuum. They either put too much weight on the human, who is subject to bias and computational limitations, or too much weight on the machine, which lacks the ability to make complex judgments based on common sense and prior knowledge. A more optimized task allocation that synergistically exploits the advantages of humans and computers is needed for system architecting. We call this new paradigm “knowledge-intensive system architecting”. A recent effort in this direction was the demonstration of the use of rule-based systems to improve the architecture evaluation process in the context of automatic trade-space exploration. This is an example of using expert knowledge to improve the performance of an essentially automated process (with the goal of supporting a currently unstructured, 100% human process). In this paper, opportunities for synergistic human-machine collaboration in architecture optimization, as opposed to architecture evaluation, are discussed. We compare the outcomes of a multi-objective architecture optimization process between a 100% automatic process, represented by a genetic algorithm with generic operators (crossover and mutation), and a hybrid process where human input is used to guide the search. In a first experiment, the expert knowledge is introduced in the form of heuristics (e.g., add synergistic instrument, remove interfering instrument). We compare the performance of these knowledge-intensive heuristics with that of simple crossover and random search (as a control). In a more interactive second experiment, the human is presented with a few architectures during the search process and is asked to propose an improved version of each architecture. We compare the performance of the search with and without the human input. The examples used for these experiments concern the architecting process of Earth observing satellite systems, for which the authors have developed a system architecture toolkit over the last five years.
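A minimal sketch of the kind of knowledge-intensive operator contrasted above with blind crossover and mutation: given a table of known instrument synergies, the operator adds an instrument that is synergistic with one already on board, falling back to a random toggle when no synergy applies. The instrument names, the synergy table, and the set-based encoding are hypothetical, not taken from the paper.

```python
# Contrast between a blind mutation and a knowledge-guided heuristic operator.
# Instrument names and the synergy table are hypothetical examples.
import random

INSTRUMENTS = ["radar", "radiometer", "lidar", "sounder", "imager"]
SYNERGIES = {("radar", "radiometer"), ("lidar", "imager")}  # domain knowledge

def random_mutation(arch: set) -> set:
    """Blind operator: toggle a random instrument on or off."""
    inst = random.choice(INSTRUMENTS)
    return arch ^ {inst}

def add_synergistic_instrument(arch: set) -> set:
    """Knowledge-intensive operator: add an instrument synergistic with one on board."""
    candidates = [b for a, b in SYNERGIES if a in arch and b not in arch]
    candidates += [a for a, b in SYNERGIES if b in arch and a not in arch]
    return arch | {random.choice(candidates)} if candidates else random_mutation(arch)

arch = {"radar"}
print(add_synergistic_instrument(arch))  # likely {'radar', 'radiometer'}
```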
IEEE Transactions on Evolutionary Computation | 2017
Nozomi Hitomi; Daniel Selva
Adaptive operator selection (AOS) is a high-level controller for an optimization algorithm that monitors the performance of a set of operators with a credit assignment strategy and adaptively applies the high performing operators with an operator selection strategy. AOS can improve the overall performance of an optimization algorithm across a wide range of problems, and it has shown promise on single-objective problems where defining an appropriate credit assignment that assesses an operator’s impact is relatively straightforward. However, there is currently a lack of AOS for multiobjective problems (MOPs) because defining an appropriate credit assignment is nontrivial for MOPs. To identify and examine the main factors in effective credit assignment strategies, this paper proposes a classification that groups credit assignment strategies by the sets of solutions used to assess an operator’s impact and by the fitness function used to compare those sets of solutions. Nine credit assignment strategies, which include five newly proposed ones, are compared experimentally on standard benchmarking problems. Results show that eight of the nine credit assignment strategies are effective in elevating the generality of a multiobjective evolutionary algorithm and outperforming a random operator selector.
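As a toy illustration of the AOS loop described above (credit assignment plus operator selection), the sketch below uses a simple single-objective reward, namely whether the offspring improved on its parent, together with probability-matching selection. The paper's subject is precisely how to define such credit for multiobjective problems, so the reward used here is only a placeholder and all constants are illustrative.

```python
# Minimal adaptive operator selection loop: probability-matching selection
# with a simple "offspring improved on parent" credit. Constants are illustrative.
import random

def select_operator(credits, p_min=0.05):
    """Probability matching: selection probability grows with accumulated credit."""
    total = sum(credits.values())
    n = len(credits)
    if total <= 0:
        probs = {op: 1.0 / n for op in credits}                      # uniform fallback
    else:
        probs = {op: p_min + (1.0 - n * p_min) * c / total
                 for op, c in credits.items()}
    ops, weights = zip(*probs.items())
    return random.choices(ops, weights=weights)[0]

def run_aos(operators, fitness, initial, iterations=100, decay=0.9):
    """operators: dict name -> function(solution) -> new solution (fitness maximized)."""
    credits = {name: 1.0 for name in operators}     # start with uniform credit
    parent = initial
    for _ in range(iterations):
        name = select_operator(credits)
        child = operators[name](parent)
        reward = max(0.0, fitness(child) - fitness(parent))  # credit assignment
        credits[name] = decay * credits[name] + reward        # decayed running credit
        if fitness(child) >= fitness(parent):
            parent = child                                     # greedy acceptance
    return parent, credits
```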
Design Automation Conference | 2015
Nozomi Hitomi; Daniel Selva
Heuristics and meta-heuristics are often used to solve complex real-world problems such as the non-linear, non-convex, and multi-objective combinatorial optimization problems that regularly appear in system design and architecture. Unfortunately, the performance of a specific heuristic is largely dependent on the specific problem at hand. Moreover, a heuristic's performance can vary throughout the optimization process. Hyper-heuristics is one approach that can maintain relatively good performance over the course of an optimization process and across a variety of problems without parameter retuning or major modifications. Given a set of domain-specific and domain-independent heuristics, a hyper-heuristic adapts its search strategy over time by selecting the most promising heuristics to use at a given point. A hyper-heuristic must have: 1) a credit assignment strategy to rank the heuristics by their likelihood of producing improving solutions; and 2) a heuristic selection strategy based on the credits assigned to each heuristic. The literature contains many examples of hyper-heuristics with effective credit assignment and heuristic selection strategies for single-objective optimization problems. In multi-objective optimization problems, however, defining credit is less straightforward because there are often competing objectives. Therefore, there is a need to define and assign credit so that heuristics are rewarded for finding solutions with good trades between the objectives. This paper studies, for the first time, different combinations of credit definition, credit aggregation, and heuristic selection strategies. Credit definitions are based on different applications of the notion of Pareto dominance, namely: A1) dominance of the offspring with respect to the parent solutions; A2) ability to produce non-dominated solutions with respect to the entire population; A3) Pareto ranking with respect to the entire population. Two different credit aggregation strategies for assigning credit are also examined: a heuristic will receive credit for B1) only the solutions it created in the current iteration, or B2) all solutions it created that are in the current population. Different heuristic selection strategies are considered, including: C1) probability matching; C2) dynamic multi-armed bandit; and C3) Hyper-GA. Thus, we conduct a full factorial experiment with three factors: credit definition (A1, A2, A3), credit aggregation (B1, B2), and heuristic selection (C1, C2, C3). Performance is measured by the hypervolume of the last population. All algorithms are tested on a design problem for a climate monitoring satellite constellation instead of classical benchmarking problems, so that domain-specific heuristics can be applied within the hyper-heuristic.
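As a concrete example of credit definition A1 above, the short sketch below awards a heuristic one unit of credit when the offspring it produced Pareto-dominates its parent. The objective values and the minimization convention are illustrative, not data from the paper.

```python
# Credit definition A1 from the list above: reward a heuristic when its
# offspring Pareto-dominates the parent (minimization assumed; illustrative).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def credit_a1(parent_objs, offspring_objs):
    """1.0 if the offspring dominates its parent, else 0.0."""
    return 1.0 if dominates(offspring_objs, parent_objs) else 0.0

# e.g., objectives = (cost, -science_benefit), both to be minimized
print(credit_a1((1000.0, -0.60), (900.0, -0.65)))   # -> 1.0 (cheaper and better)
print(credit_a1((1000.0, -0.60), (1100.0, -0.70)))  # -> 0.0 (a trade, no dominance)
```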