Cin-Young Lee
California Institute of Technology
Publications
Featured research published by Cin-Young Lee.
ieee aerospace conference | 2013
Christopher Delp; Doris Lam; Elyse Fosse; Cin-Young Lee
As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL for applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration are enabled. We will also describe the basic architecture for the enterprise applications that support this approach.
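The view-and-document-generation pattern above can be illustrated with a small sketch. The Python below is hypothetical, not JPL's actual SysML tooling: the Element and Viewpoint classes, their fields, and the render method are assumptions made purely to show the core idea of a viewpoint selecting model elements and rendering them into a document section.

```python
# Hypothetical sketch of view/document generation from a system model.
# Names and structure are illustrative assumptions, not JPL's tooling.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    stereotype: str
    description: str = ""

@dataclass
class Viewpoint:
    title: str
    stereotype: str          # which kind of model element this viewpoint exposes

    def render(self, model):
        """Produce a document section (plain text) from the model."""
        selected = [e for e in model if e.stereotype == self.stereotype]
        lines = [self.title, "-" * len(self.title)]
        lines += [f"- {e.name}: {e.description}" for e in selected]
        return "\n".join(lines)

model = [
    Element("Telecom Subsystem", "block", "Provides the relay radio link."),
    Element("Max data rate", "requirement", "Shall support 2 Mb/s return."),
]

print(Viewpoint("Requirements", "requirement").render(model))
```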
Journal of Hydrology | 2002
Tien-Chang Lee; Thomas Perina; Cin-Young Lee
A genetic algorithm is used here to guess-estimate a close-to-true set of trial values as input to a three-staged quasi-linear inverse modeling scheme for the determination of aquifer parameters. To validate the parameter determination, in addition to the conventional measures of misfit root mean square (rms) and distribution, the aquifer thickness is treated as an unknown parameter and the model parameters are further evaluated by comparing the expected drawdown with the observed drawdown at wells that are not used for parameter determination (extrapolation fitting). The method is tested with synthetic and observed drawdown data from five partially screened monitoring wells in a water-table aquifer. Test results for synthetic data doped with random errors indicate that modeling based on data from two or more wells can yield satisfactory parameter values and extrapolation misfits in an ideal aquifer. For field data, the results indicate that a model misfit on par with the standard error of the data is achievable for each individual well or a combination of two wells, but the extrapolation misfit distributions are generally biased and their rms values are far greater, possibly due to aquifer heterogeneity. Consistent parameter values can be obtained from the geometric means of multiple runs of the genetic-inverse modeling of one-, two-, three-, and four-well data. Our test aquifer can be represented by a set of parameters with 10 to 15% consistency, including transmissivity, storativity, vertical-to-horizontal conductivity ratio, and storativity-to-specific yield ratio, as affirmed by model aquifer thicknesses that deviate less than 10% from the actual thickness.
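As a rough illustration of the first step in this abstract, the sketch below uses a genetic algorithm to "guess-estimate" trial values of two aquifer parameters (transmissivity T and storativity S) by minimizing rms misfit to drawdown observations; the best candidate would then seed a quasi-linear inverse step. The forward model here is the Cooper-Jacob approximation rather than the partially penetrating well solution used in the paper, and all data, ranges, and GA settings are illustrative assumptions.

```python
# Hedged sketch: GA seeding of an inverse model with trial (T, S) values.
# Toy forward model and data; not the paper's actual aquifer solution.
import random, math

observed = [(600, 0.42), (3600, 0.95), (7200, 1.12)]   # (time s, drawdown m), illustrative
Q, r = 2.0e-3, 30.0                                     # pumping rate m^3/s, well radius m, illustrative

def drawdown(T, S, t):
    u = r * r * S / (4.0 * T * t)
    # Cooper-Jacob approximation of the Theis well function (valid for small u)
    return (Q / (4.0 * math.pi * T)) * (-0.5772 - math.log(u))

def rms_misfit(ind):
    T, S = ind
    return math.sqrt(sum((drawdown(T, S, t) - d) ** 2 for t, d in observed) / len(observed))

def evolve(pop_size=60, gens=80):
    pop = [(10 ** random.uniform(-4, -1), 10 ** random.uniform(-5, -2)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rms_misfit)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            (T1, S1), (T2, S2) = random.sample(parents, 2)
            # geometric crossover plus a small multiplicative mutation
            children.append((math.sqrt(T1 * T2) * 10 ** random.gauss(0, 0.05),
                             math.sqrt(S1 * S2) * 10 ** random.gauss(0, 0.05)))
        pop = parents + children
    return min(pop, key=rms_misfit)

best = evolve()
print("trial T, S for the inverse step:", best, "rms misfit:", rms_misfit(best))
```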
ieee aerospace conference | 2005
Cin-Young Lee; Kar-Ming Cheung; C. Edwards; Stuart Kerridge; Gary Noreen; A. Vaisnys
This paper describes a tool to aid orbit design called the Telecom Orbit Analysis and Simulation Tool (TOAST). By specifying the six orbital elements of an orbit, a time frame of interest, a horizon mask angle, and telecom parameters such as transmitter power, frequency, antenna gains, antenna losses, required link margin, and received-power thresholds for the supported data rates, TOAST enables the user to view orbit performance as animations of two- or three-dimensional telecom metrics at any point on the planet (i.e., on global planetary maps). Supported metrics include: (i) number of contacts; (ii) total contact duration; (iii) maximum communication gap; (iv) maximum supportable rate; and (v) return data volume at the best single rate or with an adaptive rate, along with (vi) the orbiter's footprint and (vii) local solar times. Unlike other existing tools, which generally provide geometry, view periods, and link analysis for an orbiter with respect to a single location on the planet, TOAST generates telecom performance metrics over the entire planet. The added capabilities provide the user an extra degree of freedom in analyzing orbits and enable the user to focus on meeting specific mission requirements, such as what data rates can be supported, what data volume can be expected, and what the time gap will be between communication periods. Although TOAST can be used to study and select orbits about any planet, we describe here its use for missions to Mars. TOAST is being used to analyze candidate orbits for the 2009 Mars Telecommunications Orbiter (MTO) mission. Telecom predicts generated by TOAST for MTO orbit candidates are laying a foundation for selecting the MTO service orbit. This paper presents numerical simulations and telecom predicts for four candidate MTO orbits.
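The per-location metrics TOAST animates ultimately rest on a link budget. The sketch below is a minimal, assumed version of that calculation: a free-space link budget that returns the highest data rate whose received-power threshold is met with the required margin. The transmit power, gains, losses, frequency, and the rate/threshold table are illustrative placeholders, not actual TOAST or MTO parameters.

```python
# Hedged sketch of a max-supportable-rate metric from a simple link budget.
# All numbers and thresholds are illustrative, not TOAST/MTO values.
import math

def received_power_dbm(p_tx_dbm, g_tx_db, g_rx_db, losses_db, range_km, freq_hz):
    wavelength = 3.0e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * range_km * 1e3 / wavelength)  # free-space path loss
    return p_tx_dbm + g_tx_db + g_rx_db - losses_db - fspl_db

# (data rate bps, required received power dBm), highest rate first -- illustrative
rate_thresholds = [(2_048_000, -115.0), (512_000, -121.0), (128_000, -127.0), (32_000, -133.0)]

def max_supportable_rate(range_km, required_margin_db=3.0):
    p_rx = received_power_dbm(p_tx_dbm=40.0, g_tx_db=25.0, g_rx_db=6.0,
                              losses_db=2.0, range_km=range_km, freq_hz=8.4e9)
    for rate, threshold in rate_thresholds:
        if p_rx - threshold >= required_margin_db:
            return rate
    return 0   # no rate closes the link with the required margin

print(max_supportable_rate(range_km=4000))
```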
AIAA Infotech@Aerospace 2010 | 2010
Kevin J. Barltrop; Brad Clement; Greg Horvath; Cin-Young Lee
Without rigorous system verification and validation (SVV), flight systems have no assurances that they will actually accomplish their objectives (e.g., the right system was built) or that the system was built to specification (e.g., the system was built correctly). As system complexity grows, exhaustive SVV becomes time and cost prohibitive as the number of interactions explodes in an exponential or even combinatorial fashion. Consequently, JPL and others have resorted to selecting test cases by hand based on engineering judgment or stochastic methods such as Monte Carlo methods. These two approaches are at opposite ends of the search spectrum, in which one is narrow and focused and the other is broad and shallow. This paper describes a novel approach to test case selection through the use of genetic algorithms (GAs), a type of heuristic search technique based on Darwinian evolution that effectively bridges the search for test cases between broad and narrow spectrums. More specifically, this paper describes the Nemesis framework for automated test case generation, execution, and analysis using GAs. Results are presented for the Dawn Mission flight testbed.

I. Introduction

Finding the fatal flaws or vulnerabilities of complex systems requires thorough testing. In the traditional approach for validating such systems, an expert selects a few key high-fidelity test scenarios that he or she believes will most likely uncover problems. Each of the cases is crafted and evaluated by hand. Sometimes, the test engineer adapts his strategy as he goes along, using interesting results from one test case to guide the selection of new cases. The usefulness of these tests in finding flaws can be limited by the biases and assumptions of the expert in the selection process. Another approach augments the expert selection process with scripting to walk through many scenarios, traversing values of various test parameters. Evaluation can be automated with test result scoring to prioritize review team attention according to features found in the test results. Unfortunately, this approach results in wasting valuable test time on families of similar cases with little new information gained. Furthermore, because the test team must wade through a large volume of results, there is less opportunity to adapt the approach to what is discovered along the way.

In this paper, we describe the application of genetic algorithms to automated test case selection to exploit the advantages of both adaptive expert case selection and automated test space exploration by evolving test scenarios that expose the vulnerabilities of a system under test (SUT) according to models and scoring functions defined by the test team. The test team controls the scope of test space coverage through what they choose to include in the model, and controls the search priorities through the definition of the fitness function to guide the evolutionary search. Furthermore, the starting point for the search is manually specified, allowing the system to cover the ground initially defined as important by the test engineers, continuing the search into other areas as well, and adapting the search to examine more closely those areas with tell-tale signs of stress.

This paper is organized into the following sections: (I) Introduction, (II) Genetic Algorithm Background, (III) Detailed Approach, (IV) Results, and (V) Conclusion.
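To make the evolutionary search concrete, the sketch below encodes a test scenario as a small set of parameters, scores it with a stand-in for a testbed run, and evolves the population toward high-stress scenarios. It illustrates the general GA-based test selection idea under assumed parameter names and scoring; it is not the Nemesis framework or the Dawn testbed interface.

```python
# Hedged sketch of GA-based test case selection; parameter names, the
# scoring function, and GA settings are illustrative assumptions only.
import random

PARAM_RANGES = {"fault_delay_s": (0, 600), "telemetry_rate": (1, 64), "num_faults": (0, 5)}

def random_scenario():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def run_on_testbed(scenario):
    # Stand-in for executing the scenario on a flight testbed and scoring the
    # telemetry for signs of stress (higher score = more interesting case).
    return -abs(scenario["fault_delay_s"] - 300) + 10 * scenario["num_faults"]

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(s, rate=0.2):
    out = dict(s)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)
    return out

def evolve(pop_size=30, generations=20):
    population = [random_scenario() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=run_on_testbed, reverse=True)
        elite = population[: pop_size // 3]                 # keep the most stressing cases
        population = elite + [mutate(crossover(*random.sample(elite, 2)))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=run_on_testbed)

print("most stressing scenario found:", evolve())
```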
ieee aerospace conference | 2009
Gian Franco Sacco; Kevin J. Barltrop; Cin-Young Lee; Gregory A. Horvath; Richard J. Terrile; Seungwon Lee
Most complex systems nowadays rely heavily on software, and spacecraft and satellite systems are no exception. Moreover, as system capabilities increase, the corresponding software required to integrate and address system tasks becomes more complex. Hence, in order to guarantee a system's success, testing of the software becomes imperative. Traditionally, exhaustive testing of all possible behaviors was conducted. However, given the increased complexity and number of interacting behaviors of current systems, the time required for such thorough testing is prohibitive. As a result, many have adopted random testing techniques to achieve sufficient coverage of the test space within a reasonable amount of time. In this paper we propose the use of genetic algorithms (GA) to greatly reduce the number of tests performed, while still maintaining the same level of confidence as current random testing approaches. We present a GA specifically tailored for the systems testing domain. In order to validate our algorithm we used the results from the Dawn test campaign. Preliminary results seem very encouraging, showing that our approach, when searching for the worst test cases, outperforms random search while limiting the search to a mere 6% of the full search domain.
ieee aerospace conference | 2003
Cin-Young Lee; Kar-Ming Cheung
At different time periods in the future, missions to Mars will overlap. Previous studies indicate that during such periods, the existing deep space communication infrastructure cannot handle all Mars communication needs. A plausible solution is to take into account the end-to-end communication performance of the network along with operational constraints, and to optimize resource usage by scheduling communications at the highest possible data throughputs. As a result, shorter communication time is required and more missions can be accommodated. This principle is demonstrated in this paper for a Mars relay communication network: a network consisting of multiple surface units and orbiters at Mars and the Deep Space Stations.
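The scheduling principle in this abstract, serving data over the highest-throughput opportunities first so that less contact time is needed, can be sketched with a simple greedy pass allocator. The pass list, data volumes, and the greedy policy below are illustrative assumptions, not the paper's actual network model or scheduler.

```python
# Hedged sketch: allocate relay passes highest-rate-first so each lander's
# data volume is returned in the least contact time. Numbers are illustrative.
passes = [  # (lander, supportable rate kb/s, max pass duration s) -- illustrative
    ("lander_A", 256, 600), ("lander_A", 128, 900),
    ("lander_B", 512, 500), ("lander_B", 64, 1200),
]
data_volume_kb = {"lander_A": 200_000, "lander_B": 180_000}

schedule = []
for lander, rate, max_dur in sorted(passes, key=lambda p: p[1], reverse=True):
    remaining = data_volume_kb[lander]
    if remaining <= 0:
        continue
    use = min(max_dur, remaining / rate)        # seconds of contact actually needed
    data_volume_kb[lander] -= use * rate
    schedule.append((lander, rate, round(use)))

print(schedule)   # less total contact time than scheduling low-rate passes first
```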
international symposium on neural networks | 2002
Cin-Young Lee; Erik K. Antonsson
An evolutionary algorithm is developed to address simultaneous learning of weights and connection topologies for a feedforward neural network. The algorithm is dependent on an embedded representation of the network, in which architecture specification is determined from the interactions of the embedded nodes. Preliminary results are presented and discussed.
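A minimal sketch of evolving weights and connection topology together is given below. It uses a flat connection-list genome with weight mutations and add/remove-connection mutations on an XOR task; the embedded, developmental representation that the paper actually proposes is not reproduced here, and all sizes and rates are assumptions.

```python
# Hedged sketch: simultaneous evolution of feedforward weights and topology
# on XOR. Flat connection-list genome; not the paper's embedded encoding.
import random, math

INPUTS, HIDDEN = 2, 3
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def random_genome():
    # genome maps (src, dst) -> weight; a missing key means "no connection"
    genome = {}
    for i in range(INPUTS):
        for h in range(HIDDEN):
            if random.random() < 0.7:
                genome[("i%d" % i, "h%d" % h)] = random.uniform(-2, 2)
    for h in range(HIDDEN):
        if random.random() < 0.7:
            genome[("h%d" % h, "o0")] = random.uniform(-2, 2)
    return genome

def forward(genome, x):
    act = {"i%d" % i: x[i] for i in range(INPUTS)}
    for h in range(HIDDEN):
        s = sum(act[src] * w for (src, dst), w in genome.items() if dst == "h%d" % h)
        act["h%d" % h] = math.tanh(s)
    out = sum(act[src] * w for (src, dst), w in genome.items() if dst == "o0")
    return 1.0 / (1.0 + math.exp(-out))

def fitness(genome):
    return -sum((forward(genome, x) - y) ** 2 for x, y in XOR)

def mutate(genome):
    g = dict(genome)
    if g and random.random() < 0.8:                 # perturb an existing weight
        k = random.choice(list(g))
        g[k] += random.gauss(0, 0.5)
    if random.random() < 0.3:                       # topology mutation: add/remove a connection
        src = random.choice(["i%d" % i for i in range(INPUTS)] + ["h%d" % h for h in range(HIDDEN)])
        dst = "o0" if src.startswith("h") else "h%d" % random.randrange(HIDDEN)
        if (src, dst) in g:
            del g[(src, dst)]
        else:
            g[(src, dst)] = random.uniform(-2, 2)
    return g

pop = [random_genome() for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
print("best fitness (0 is perfect):", round(fitness(max(pop, key=fitness)), 3))
```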
ieee aerospace conference | 2005
Cin-Young Lee; Kar-Ming Cheung
The 2005 Mars Reconnaissance Orbiter (MRO) mission has the primary objective of placing a science orbiter into Mars orbit to perform remote sensing investigations that will study the surface, subsurface, and atmosphere of Mars and will identify potential landing sites for future missions. Its relay service for a landing asset on the Martian surface will not begin until the arrival of the 2009 Phoenix Mars Lander. Among its many newly added features, such as a finer-resolution camera, advanced instruments, and the second-generation communication radio Electra, MRO is capable of rolling the spacecraft at a fixed angle during its flight over a surface asset. This improves MRO's link performance and thus increases the overall data throughput. In our study, we are interested in determining the optimum roll angle for each pass relative to nadir pointing and its impact in terms of data throughput gain. In this paper, we present a mathematical formulation, based on a constrained optimization framework, to determine the optimum roll angle. The model is designed to find the best roll angle for each pass, where the roll angle is bounded by ±30°, such that the resulting data throughput is largest. The data throughput is computed based on MRO's and Phoenix's geometry, the roll angle, and the telecom parameters and capabilities, including supportable data rates, antenna patterns, etc. Other important quantities include the statistics of the passes at different peak elevation angles, margins, day passes, duration and frequency of the passes, types of data rates, etc. These quantities can provide general guidelines to facilitate the planning process between the science and navigation teams. Long-term simulations and studies have also been carried out. Statistical findings based on the simulation for several possible future landing sites will be presented.
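The roll-angle selection can be sketched as a bounded one-dimensional search: evaluate a throughput model over candidate roll angles within the ±30° constraint and keep the maximizer. The antenna pattern, the mapping from link gain to data rate, and the pass geometry below are toy assumptions, not MRO/Electra values; only the constrained-search structure mirrors the formulation described above.

```python
# Hedged sketch of per-pass roll-angle optimization within a ±30° bound.
# Gain pattern, rate mapping, and geometry are illustrative placeholders.
import math

def throughput_mbits(roll_deg, off_nadir_to_lander_deg=20.0, pass_duration_s=400.0):
    # Pointing error between the rolled antenna boresight and the lander direction.
    pointing_err = abs(off_nadir_to_lander_deg - roll_deg)
    gain_db = 6.0 - 0.02 * pointing_err ** 2             # toy parabolic antenna pattern
    rate_kbps = 128.0 * 10 ** (gain_db / 10.0) / 4.0      # toy mapping from link gain to data rate
    return rate_kbps * pass_duration_s / 1000.0

best_roll = max((r / 10.0 for r in range(-300, 301)), key=throughput_mbits)  # scan within ±30°
nadir = throughput_mbits(0.0)
print(f"optimum roll: {best_roll:+.1f} deg,"
      f" gain over nadir pointing: {throughput_mbits(best_roll) / nadir:.2f}x")
```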
Formal engineering design synthesis | 2001
Cin-Young Lee; Lin Ma; Erik K. Antonsson
Archive | 2002
Erik K. Antonsson; Cin-Young Lee