Enabling and Enhancing Astrophysical Observations with Autonomous Systems
Rashied Amini, Steve Chien, Lorraine Fesq, Jeremy Frank, Ksenia Kolcio, Bertrand Mennesson, Sara Seager, Rachel Street
July 10, 2019
Endorsements
Patricia Beauchamp, John Day, Russell Genet, Jason Glenn, Ryan Mackey, Marco Quadrelli, Rebecca Ringuette, Daniel Stern, Tiago Vaquero

Affiliations: NASA Jet Propulsion Laboratory; NASA Ames Research Center; Okean Solutions; Massachusetts Institute of Technology; Las Cumbres Observatory; California Polytechnic State University; University of Colorado at Boulder; University of Iowa. Corresponding author: [email protected].

Executive Summary
Autonomy is the ability of a system to achieve goals while operating independently of external control [1]. The revolutionary advantages of autonomous systems are recognized in numerous markets, e.g. automotive and aeronautics. Acknowledging this revolutionary impact, demand is increasing from consumers and businesses alike, and investments have grown year-over-year to meet it: in self-driving cars alone, $76B was invested from 2014 to 2017 [2]. In the previous Planetary Science Decadal, increased autonomy was identified as one of eight core multi-mission technologies required for future missions [3].

The impact of autonomous systems on our ability to observe the universe can be just as revolutionary [4]. However, relevant autonomy work to date has been limited in scope and too disjoint to confidently deliver anticipated capabilities, like in-space assembly (ISA), in a low risk and repeatable manner in the 2020s or even the 2030s. So that the astrophysics community can realize the benefits of autonomous systems, this paper includes:

• A description of autonomous systems with relevant examples
• Observations enabled and enhanced by autonomous systems
• Gaps in adopting autonomous systems
• Suggested recommendations for adoption by the Astro2020 Decadal

As we consider the observations necessary to answer new science questions formed in the 2010s, the need for autonomy is clear. Concept studies for the Astro2020 Decadal require operations that are more complex than ever before. Increasingly complex space- and ground-based observatories have more systems, components, and software. More engineering complexity invariably means more paths for anomalies to disrupt a system's ability to perform its mission. This can reduce observational efficiency and potentially negate the advantages of larger apertures and more sensitive detectors.
Servicing is a legal requirement for WFIRST and the Flagship mission of the 2030s [5], yet past and planned demonstrations may not provide sufficient heritage to confidently meet this requirement. In-space assembly is currently being evaluated to construct large aperture space telescopes [6]. For both servicing and ISA, there are questions about how nominal operations will be assured, the feasibility of teleoperation in deep space, and the response to anomalies during robotic operation.

The past decade has seen a revolution in access to space, with low cost launch vehicles, commercial off-the-shelf technology, and programs that have enabled numerous cubesat missions. NASA and academic institutions will be operating more small satellites, and operations centers will need to adapt. The need will be greater if future human exploration goals to launch dozens of cubesats per SLS launch are met [7]. Operating autonomous observatories provides one solution to this impending problem. Notably, several ground-based observatories, like the Las Cumbres and ALMA observatories, have begun using autonomous operations to command large arrays of telescopes, identifying advantages for observatories that follow their example. Planet and presumably SpaceX's Starlink, private space mission operators, have reached a break point where traditional commanding is inadequate for their large constellations, and they operate spacecraft with automated scheduling [8].

Gehrels/Swift is an inspiring example of the time-domain observations that autonomous systems enable. The multi-messenger approach for characterizing the physics leading to and resulting from gravitational wave events will require missions similar to Gehrels/Swift, which relies on prescriptive state machines, statically-programmed conditions and routines also used in spacecraft fault protection, to execute autonomous gamma-ray burst (GRB) follow-up observations.
The system autonomy approach detailed in this paper offers several advantages over state machines in terms of dynamic decision-making and scalability. One major advantage is the ability to make decisions using on-board analysis of data to change an observation program.

[Figure 1: Effective, reliable autonomous systems must coordinate between the resources utilized by a system's lower-level functions to achieve system-level goals. Appendix A offers an illustrated example of a system autonomy framework.]

Dynamic decision-making also enables the restoration of functionality in the event of an anomaly. This type of decision-making is enabled by on-board health monitoring software, which monitors and diagnoses hardware anomalies to support autonomous systems. This results in greater observational efficiency and universally benefits all observatories. For observatories with competed time, it means more PIs can be supported. For mapping missions, like the Galaxy Evolution Probe, the Probe of Inflation and Cosmic Origins, and the Cosmic Dawn Intensity Mapper Probe, greater depths can be reached per unit time [9, 10, 11]. For time-domain surveys, it results in fewer gaps in data.

As evidenced by private investments and developments in ground-based observatories, the adoption of autonomous systems in space is inevitable. There are two questions for the field: "When will we start using it?" and "How will we start using it?"
Given the ambitions of the community, the time to begin is now. In order to use autonomy in a repeatable, low risk, and cost-effective way, NASA, spacecraft vendors, and the astrophysics community need to cooperatively develop a coherent technical path forward. To do so, our primary recommendation is for NASA to incentivize the use of autonomous systems for competed space missions, for instance through a cost cap credit. Adoption in the 2020s will reduce the risk of future Flagship servicing missions.
Observing the proceedings of the Space Astrophysics Landscape in 2020 and Beyond meeting, it is clear that a gap exists between the expectations of the astrophysics community and the technical readiness of the autonomy technologies required to meet them. To understand this gap, we need to first define autonomy in a relevant context.

A hierarchy of systems is represented in Figure 1. At the bottom of the hierarchy is the functional level, where control and autonomy are exercised in a limited domain. Functional control is the commanding of actuators and sensors, e.g. a command is sent and a motor turns at a commanded rate. Functional autonomy is decision-making within the boundaries of the functional element. A simple example is a state machine that engages or disengages a heater based on thermometer input. A more complicated example is an attitude controller that takes inputs of attitude knowledge (e.g. star trackers); its output is control system actuation to maintain a desired attitude. Pre-programmed routines filter inputs and evaluate conflicting knowledge, resulting in predictable behavior.

More complex forms of functional autonomy have already been demonstrated and are currently being developed. For instance, autonomous optical navigation determines deviation from the desired orbit ephemeris and has been used on Deep Space 1, Deep Impact/EPOXI, other planetary missions, and soon the Arcsecond Space Telescope Enabling Research in Astrophysics (ASTERIA) [12, 13]. On-going work on servicing and ISA utilizes computer vision as a knowledge source to control robotic actuation [6]. On-orbit robotic servicing was first demonstrated on DARPA's Orbital Express in 2007 [14]. In the next few years, RESTORE-L will be used to service Landsat-7 in low Earth orbit using teleoperation after autonomous docking [15].

However, functional elements utilize system resources, e.g. time, power, attitude, data storage, etc. Spacecraft are resource limited, and efficient use is critical to mission success.
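The heater state machine mentioned above can be sketched in a few lines. This is a minimal illustration, not flight software; the temperature thresholds are invented assumptions.

```python
# Minimal sketch of functional autonomy: a two-state heater controller.
# The setpoints are illustrative, not taken from any flight system.

def heater_step(state: str, temp_c: float,
                on_below: float = -5.0, off_above: float = 5.0) -> str:
    """Return the next heater state given a thermometer reading.

    Hysteresis between on_below and off_above prevents rapid cycling."""
    if state == "OFF" and temp_c < on_below:
        return "ON"
    if state == "ON" and temp_c > off_above:
        return "OFF"
    return state  # within the deadband: hold the current state

# Behavior is fully predictable: the next state depends only on the
# current state and the bounded thermometer input.
state = "OFF"
for reading in [0.0, -6.0, -2.0, 6.0]:
    state = heater_step(state, reading)
# state is "OFF" again after the warm reading
```

This predictability is exactly what lets resource budgets be allotted in advance for state machine-based autonomy.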
Different activities may utilize resources in a mutually exclusive way; for instance, a space telescope may not be able to point at a target while simultaneously pointing its antenna toward Earth for communications. Some resources are zero-sum but accommodate multiple spacecraft goals; for instance, all powered equipment requires power, but not all subsystem power modes can be supported simultaneously. Thus, there is a state of competition between different system goals. In the current state of practice, this competition is resolved by human planning during operations. Tools are used to define system activities, like observing and transmitting data, based on commands that are tied to certain resources. The goals of the scientists to observe the sky and the goals of the engineers to preserve the spacecraft are merged using these tools to develop time-ordered sequences of commands that are uplinked to the spacecraft, e.g. [16]. An extension of time-ordered sequences is conditional sequencing, where sequences use conditional statements as a state model. This approach has the capability of storing pre-defined routines that can later be executed [17].

Autonomy poses a challenge to operational planning: how can you command a system that makes its own decisions? State machine-based autonomy is predictable in well-defined environments, and so resource budgets can be allotted because the input domain is well characterized. Spacecraft health is further ensured by fault protection state machines, adding another layer of protection. The use of state machines enables Gehrels/Swift to detect GRBs with the wide-field Burst Alert Telescope and slew to observe with its two other payloads [18].
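The zero-sum power example above can be sketched as the kind of feasibility check that operational planning tools automate today. All subsystem names and power figures here are invented for illustration.

```python
# Illustrative sketch of a zero-sum resource check: can a set of
# concurrent activities fit within the available bus power?
# All numbers are invented, not from any mission.

BUS_POWER_W = 400.0

POWER_DRAW_W = {          # hypothetical subsystem power modes
    "telescope_observe": 180.0,
    "transmitter_on": 150.0,
    "reaction_wheels": 90.0,
    "heaters": 60.0,
}

def feasible(activities: list, margin_w: float = 20.0) -> bool:
    """True if the concurrent activities fit under the power budget."""
    total = sum(POWER_DRAW_W[a] for a in activities)
    return total + margin_w <= BUS_POWER_W

feasible(["telescope_observe", "reaction_wheels", "heaters"])         # fits
feasible(["telescope_observe", "transmitter_on", "reaction_wheels"])  # does not
```

Human planners resolve such conflicts on the ground; on-board planning moves this negotiation into the spacecraft itself.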
However, machine learning-based decision-making and variable environments, exemplified by computer vision-guided robotic control, mean that resource utilization cannot be effectively bounded in advance, and so reliable, safe operation cannot be readily assured with traditional commanding.

[Figure 2: Task networks offer numerous pathways in time and state-space to achieve goals requested from ground operators. Implementing tasknet-based commanding enables "push button, get science" missions.]

Coordination of resources used by functional elements, prescriptive or not, can be accomplished at the system level through on-board planning and execution. This approach contrasts with traditional commanding with sequences through its use of task networks (tasknets, though goal and constraint networks are also used in the literature), described in Figure 2. Tasks are defined as commands associated with metadata defining the state conditions required for their execution and the state impacts that result from their execution. Thus, graph networks of tasks can be constructed, with tasks as nodes and edges connecting tasks whose state impacts are the state requirements of another task. Moreover, tasks can have temporal constraints to be sequence-like. In this manner an autonomous system can be commanded like a traditional system if desired.

Sets of tasks can be defined as independent, uniquely prioritized system goals. Some goals identify system state transitions, such as the acquisition of new science data. Other goals identify states that need to be maintained and restored if lost, such as those related to spacecraft health. The role of on-board planning and execution is to negotiate between the constraints of all goals so that they can be executed without conflict or violation of safe resource limits.

The final level of the hierarchy at the top of Figure 1 supports multiple autonomous systems in a multi-agent architecture.
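The tasknet construction described above, with edges wherever one task's state impacts supply another task's state requirements, can be sketched with a small invented example. The task names and states below are illustrative, not from any flight tasknet.

```python
# Sketch of a task network: tasks carry state requirements and impacts,
# and an edge runs from a task to any task whose requirements it supplies.
# Task names and states are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    requires: set = field(default_factory=set)  # state conditions to execute
    impacts: set = field(default_factory=set)   # state changes on completion

tasks = [
    Task("power_wheels", impacts={"wheels_on"}),
    Task("slew_to_target", requires={"wheels_on"}, impacts={"on_target"}),
    Task("observe", requires={"on_target"}, impacts={"data_on_recorder"}),
    Task("downlink", requires={"data_on_recorder"}),
]

# Build edges: t1 -> t2 if t1's impacts supply any of t2's requirements.
edges = [(t1.name, t2.name)
         for t1 in tasks for t2 in tasks
         if t1.impacts & t2.requires]
# edges: power_wheels -> slew_to_target -> observe -> downlink
```

Adding temporal constraints to such tasks recovers sequence-like behavior, which is how an autonomous system can still be commanded like a traditional one.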
The Spitzer Space Telescope, Dawn, Juno, and many other planetary missions have made use of conditional sequencing with the Virtual Machine Language (VML). Spitzer reported several advantages over traditional sequencing using VML. In particular, it made observations contingent on telescope settling state rather than sequenced time, which added one or two extra observations in an 11 hour observing window. It also had the advantage of reducing spacecraft safing due to on-board memory overflow [19]. However, as reported in [19], the limiting factor in implementing more of these autonomous behaviors was that there was "no fast and effective way of modeling the flight system behavior on the ground."

Autonomous systems relying on on-board planning and execution are beginning to see increased use on space- and ground-based observatories. A prominent example is the use of ASPEN/CASPER on Earth Observer-1, which used on-board science planning and execution to detect novel terrestrial scenes, like disasters, and autonomously perform follow-up observations [20, 21]. Extending the work of CASPER, the Intelligent Payload EXperiment (IPEX) cubesat executed one year of autonomous payload operations using its on-board planner [22, 4]. The PLan Execution Interchange Language (PLEXIL) is funded to be used for a technology demonstration mission of multi-agent autonomy. Later in 2019, ASTERIA will demonstrate the use of the Multi-mission EXECutive (MEXEC). Next year, Mars 2020 will use the Onboard Scheduler to maximize science return by using excess time and power at the end of each Martian sol to plan additional measurements [23, 24, 25]. Temporal planning and scheduling systems also include IxTeT [26], used for robotic control, and EUROPA [27]. Other systems have been developed based on similar principles, notably IDEA and T-REX, used for autonomous underwater vehicles [28].
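The Spitzer/VML advantage described above, triggering an observation on the telescope's actual settling state rather than on a worst-case sequenced wait, can be illustrated schematically. The settling-time distribution, window length, and slew count below are all invented assumptions; the point is only that conditioning on state recovers observing time.

```python
# Schematic comparison of time-ordered vs conditional sequencing.
# Settling times and counts are invented; conditioning on actual
# settling state frees time that a fixed worst-case wait would burn.

import random

def settle_time() -> float:
    """Hypothetical per-slew settling time, in minutes."""
    return random.uniform(2.0, 8.0)

WORST_CASE_WAIT = 8.0  # a time-ordered sequence must assume the worst case

def observing_time(window_min: float, slews: int, conditional: bool) -> float:
    """Minutes left for science after settling waits in one window."""
    wait = sum(settle_time() if conditional else WORST_CASE_WAIT
               for _ in range(slews))
    return window_min - wait

random.seed(0)
t_seq = observing_time(660.0, 20, conditional=False)   # 11 h window, fixed waits
t_cond = observing_time(660.0, 20, conditional=True)   # wait on actual settling
# t_cond > t_seq: the recovered minutes can fit extra observations
```

The catch reported by [19] applies here too: exploiting such behaviors safely requires a way to model the flight system's response on the ground.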
Appendix A offers an example of how these software systems are implemented in practice.

Las Cumbres Observatory (LCO) and the Atacama Large Millimeter/submillimeter Array (ALMA) are examples of ground-based observatories whose operations are autonomously planned and executed. Las Cumbres Observatory is a network of 18 telescopes at six sites that operate as a single observatory, enabling persistent observation. Scientists request observations, which are assigned and scheduled through a global scheduler [29, 30]. General-purpose software has been developed for autonomous telescope operations that can be adopted by future observatories operating on these principles [31]. ALMA dynamically schedules and executes 30 minute "scheduling blocks" based on weather, science priority, project completion, and other parameters [32].

Automated scheduling has traditionally been used for operational planning. Most relevantly, the Space Telescope Science Institute uses SPIKE for planning Hubble observations [33]. Planet uses automated scheduling to operate its fleet of Earth-observing cubesats. In human spaceflight, Timeliner has seen significant use on-board the International Space Station (ISS) and is being considered as a candidate for the Lunar Gateway [34]. While automated planning streamlines operations, it still has drawbacks when the plan cannot succeed due to operational conditions.

Autonomous systems have already enabled new astrophysics. Both ground- and space-based transient event observatories are fundamentally enabled by autonomous systems. Autonomous transient event detection and follow-up observation capability has been demonstrated with Gehrels/Swift and the Zwicky Transient Facility [35]. As exemplified by LCO, time-domain astronomy observations, e.g. supernovae, microlensing, near earth asteroids, tidal disruption events, gravitational wave events, etc.,
require real-time, highly reactive telescope scheduling. In these cases, observations cannot be planned in advance, and the configuration of the observations may need to evolve over time according to the characteristics of the event.

With the projected improvements to ground-based detection and localization of gravitational wave (GW) events, there is a need for observatories that can rapidly observe potential multi-messenger signals. Ground-based observatories will require the ability to respond to external signals, verify observability of the GW ellipse given current observatory conditions, and re-task to perform GW follow-up observation while maintaining knowledge of the past observation. Space-based observatories will be required to do the same while also maintaining spacecraft health. Gehrels/Swift itself was launched in 2004 and may need replacement in the 2020s to retain the community's ability to perform GRB detection and localization over large areas of the sky. If ESA's Theseus is selected for M5, system autonomy software and expertise can serve as a potential NASA contribution to that mission.

Given the past priorities of the Astrophysics Decadal and NASA funding, it is expected that space-based time-domain observatories will be competed and subject to cost caps. For instance, the Gravitational-Wave Ultraviolet Counterpart Imager (GUCI) has already been proposed for the SmallSat call [36]. These missions can be architected using state machine autonomy, following the Gehrels/Swift example. Given the bounded nature of time-domain observations and the additional advantages afforded by on-board planning/execution, we note that these missions can alternatively use on-board planning/execution as a relatively low risk means of demonstrating and increasing the community's confidence in the technology.

As discussed in [6], system-level autonomy is required for ISA and servicing in order to coordinate robotic autonomy with the rest of the spacecraft.
One example of how critical system-level autonomy is to ISA and servicing is the coordination of a mass model as robotic operation is performed. At a high level, a servicer spacecraft has the goals of performing robotic operation and assuring attitude control in the presence of disturbances (gravity gradient, solar pressure). To accomplish the former goal, a robotic arm moves, changing the spacecraft's moment of inertia. To accomplish the second goal, the attitude control system maintains attitude based on a model of the spacecraft's moment of inertia. If robotic action is not coordinated, the attitude controller's moment of inertia model will not be consistent with reality. This may lead to over- or under-actuation of reaction wheels, potentially leading to collision risk and mission failure for both spacecraft.

Multi-agent autonomy also enables new observations. The AEON Network is a ground-based facility currently under development operating numerous telescopes that will allow astronomers to submit requests for observation in real-time. Through multi-agent autonomy, a large network of ground- and space-based observatories, like AEON, can coordinate their observing programs across multiple facilities and wavelengths, serving as a powerful tool for characterizing new discoveries. Multi-agent autonomy can also be used on constellations of low cost satellites as distributed transient event (namely GRB) observatories [37, 38]. By using low cost scintillating detectors on low cost smallsat/cubesat platforms, localization can be performed through time-of-arrival analysis, similar to the Interplanetary Network. One advantage of this approach is the timeliness of observation. For instance, in simulations of flooding event observations by an Earth-observing constellation, a multi-agent architecture measured flood area with 96% accuracy over time, as opposed to 70% accuracy for a centrally planned architecture, owing to the timeliness of observation [39].
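Time-of-arrival localization of the kind mentioned above reduces, for a single two-detector baseline, to simple geometry: a plane wavefront arriving with delay Δt across a baseline b constrains the source to an annulus at angle θ from the baseline, with cos θ = cΔt/b. The baseline and delay below are invented, illustrative numbers.

```python
# Sketch of two-detector time-of-arrival localization: a measured delay
# constrains the source to an annulus at angle theta from the baseline,
# cos(theta) = c * dt / b. The baseline and delay are invented.

import math

C = 299_792_458.0  # speed of light, m/s

def annulus_angle_deg(baseline_m: float, delay_s: float) -> float:
    """Half-opening angle of the localization annulus, in degrees."""
    cos_theta = C * delay_s / baseline_m
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("measured delay inconsistent with baseline length")
    return math.degrees(math.acos(cos_theta))

# A ~7000 km baseline and a 10 ms measured delay (illustrative):
theta = annulus_angle_deg(7.0e6, 0.010)
# cos(theta) = 3e8 * 0.01 / 7e6 ~ 0.43, so theta ~ 65 degrees
```

Intersecting annuli from several baselines, as the Interplanetary Network does, shrinks the localization region; timing precision and baseline length set the annulus width.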
Additionally, multi-agent coordination of more than two assets may be required for, or would greatly facilitate, interferometry missions such as LISA. A unique class of missions that would benefit from both ISA and multi-agent autonomy are radio and possibly NIR/optical/UV interferometry missions that may require in-space assembly of large apertures as well as coordination.

Autonomous systems also complement the increased access to space afforded by small satellites and low-cost launch vehicles. As the total number of missions increases, let alone missions that may utilize more than one spacecraft, such as GUCI, ground stations and operations facilities become a bottleneck for commanding and monitoring spacecraft. At some point, large numbers of traditional spacecraft cannot be efficiently commanded through traditional means. Autonomous systems reduce the human effort required to command, as the burden can be off-loaded to an on-board planner.

Current plans for human exploration offer new platforms for astrophysics missions, creating new opportunities for the development of observatories. Most imminently, lunar exploration may create dozens of opportunities for new measurements. With cubesats piggybacking on launches and opportunities to use the Lunar Gateway as a platform for payloads, managing multiple missions and scheduling observations that may have conflicting pointing and thermal requirements on the Lunar Gateway becomes increasingly difficult to coordinate across multiple teams [40]. Again, an operational bottleneck results that can be resolved through automated planning. Additionally, returning to the Moon creates new opportunities for lunar surface-based observatories. This offers unique opportunities for some radio bands and for cosmic ray, MeV γ-ray, X-ray, and UV measurements that cannot be made from Earth's surface. A Probe mission concept, FARSIDE, is a ~10 MHz radio observatory on the far side of the Moon. As it requires a rover for deployment, autonomous mobility and robotic assembly capability are critical to mission feasibility [41].
The traditional paradigm of commanding reduces the overall efficiency of targeted observing programs, as observation length is pre-determined in advance; data is only later downlinked and analyzed on the ground. However, the efficiency of observing programs can be improved by analyzing data on-board to inform system decision-making.

One example is exoplanet direct imaging, exemplified by HabEx and LUVOIR, which requires a raw contrast level of 10⁻¹⁰ to perform direct imaging of exo-Earths. This raw contrast can only be effectively achieved in cases where exozodiacal light is not so bright that it reduces the effective raw contrast at the exoplanet's location. Even if exozodiacal light is previously characterized in mid-IR wavelengths [42], these observations may not predict the exozodiacal light at HabEx/LUVOIR near-UV to near-IR wavelengths. Additionally, not all systems will have a constrained inclination, which impacts the apparent brightness of the exozodiacal dust. Currently, HabEx and LUVOIR will schedule their observations in advance and use a pre-determined observing program based on little or no knowledge of the actual level of exozodi optical brightness around individual targets.

In an autonomous system, coronagraphic imaging can be analyzed on-board the spacecraft to evaluate the contribution of exozodiacal light and determine the value of continuing observation. In this case, excessive exozodiacal light can be detected on-board within a fraction of the planned observation time. On-board data processing software can then alert the on-board planner to truncate the observation so the observatory can perform the next scheduled observation. Data from the truncated observation is later downlinked for future analysis. In this example, more targets are observed more quickly, resulting in more observing time for other targets of interest and greater exo-Earth yield during the primary mission [43].

Recommendation:
NASA should use ROSES as a means of funding software development for on-board data processing.

The advantage of on-board data processing in union with a system planner is not limited to space-based observatories. Subsystems that evaluate weather and seeing conditions can help autonomously reschedule planned observations that may not be possible as originally scheduled, improving their net efficiency.
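The truncation decision described above can be sketched as a simple on-board policy. The contrast threshold, the acceptable exozodi-to-target ratio, and both function names are stand-in assumptions, not a flight algorithm.

```python
# Sketch of the on-board truncation decision: partway into a coronagraphic
# integration, estimate the exozodiacal light at the planet's location and
# truncate if it would swamp the raw contrast. The 1e-10 target reflects
# the exo-Earth imaging requirement; the 10x ratio is an invented threshold.

def continue_observation(est_exozodi_contrast: float,
                         raw_contrast_target: float = 1e-10,
                         max_ratio: float = 10.0) -> bool:
    """Keep integrating only if exozodi light leaves the target detectable."""
    return est_exozodi_contrast <= max_ratio * raw_contrast_target

def plan_next_step(est_exozodi_contrast: float) -> str:
    if continue_observation(est_exozodi_contrast):
        return "continue_integration"
    # Alert the on-board planner: free the time for the next target and
    # downlink the truncated data for later analysis on the ground.
    return "truncate_and_slew_to_next_target"

plan_next_step(2e-10)   # faint exozodi: keep integrating
plan_next_step(5e-9)    # bright exozodi: truncate early
```

The science value lies in the planner's response: truncated time is immediately reallocated to the next scheduled target, raising exo-Earth yield over the mission.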
Recommendation:
NASA and NSF should incentivize the development of future ground-based observatories with automated scheduling/execution, following the example of ALMA and LCO.

As discussed above, on-board data processing and multi-agent autonomy can be used in a coordinated network of ground- and space-based observatories to perform GW follow-up observations. In such an architecture, localization that is currently performed post hoc as data is released can instead be performed on-board within the constellation, resulting in localization while the source is still emitting brightly.
Traditional space systems have fault protection schemes that enter safe modes, requiring human diagnosis and commanding to restore nominal operation. As a result, 4% of nominal spaceflight operations are blocked by spacecraft safings [44]. Notably, [44] presents a lower bound on blocked operational time, as other anomalies can occur that restrict nominal operations and do not cause safing.

Figure 3 indicates that on-board planning/execution with on-board health diagnosis may mitigate the impact of about 50% of safings. This adds an additional week of nominal operations per year, for two reasons. First, rather than relying on state machines for executing fault protection, health maintenance tasknets can restore the minimum functionality required to perform science operations while not endangering spacecraft health [45, 46]. Second, this architecture permits integration of on-board health diagnosis to monitor the health of hardware and local models for attitude knowledge and control, a major cause of safing events, to inform these health maintenance tasknets [47, 48].

While an additional week of data per space observatory may seem marginal, if applied to NASA's fleet of space-based observatories it would result in 11 additional weeks of science per year for the community. The benefit is most useful to observatories with PI-directed observations, like Hubble and Spitzer, where additional PIs can be supported. For mapping missions, like GEP, additional mapping depth per unit time is achieved. For time-domain surveys, coverage is more complete in time.

[Figure 3: Histogram of safing events binned on the number of days between suspension and restoration of nominal operation. With on-board planning and execution and on-board health diagnosis, about 50% of anomalies resulting in safing may be averted. Result based on analysis of the [44] safing dataset.]

Even without on-board planning/execution, health diagnosis models and software can improve ground- and balloon-based observatories. Ballooning in particular suffers from a high failure rate, owing to ad hoc integration of multiple payloads on-site and schedule constraints forcing limited testing. Recently, NASA JPL evaluated technologies for a self-reliant rover, during which on-board health diagnosis was found to be effective in the build, integration, and testing environments in discovering and diagnosing previously undetected hardware issues [49, 50, 51]. Health diagnosis software can be used for ballooning systems that are used repeatedly, such as mirror motor control, pressure vessels, and power generation, to detect hardware issues. This can reduce complexity and stress during the balloon integration phase and improve the success rate of balloon missions.
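The "additional week per year" and "11 fleet-weeks" figures follow from simple arithmetic on the numbers above. The 11-observatory fleet size is an assumption inferred from the stated fleet total, not a figure given in the source.

```python
# Back-of-envelope check of the figures above: 4% of nominal operations
# blocked by safing [44], ~50% of safings avertable (Figure 3), applied
# across an assumed ~11-observatory NASA fleet.

blocked_fraction = 0.04     # of nominal operations, from [44]
avertable_fraction = 0.50   # estimated from the Figure 3 histogram
observatories = 11          # assumed fleet size implied by the 11-week total

days_recovered = 365 * blocked_fraction * avertable_fraction   # ~7.3 days
weeks_recovered = days_recovered / 7                           # ~1 week each
fleet_weeks = weeks_recovered * observatories                  # ~11 fleet-weeks
```

This also shows why [44] is a lower bound: anomalies that restrict operations without triggering safing would add to the recoverable time.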
Recommendation:
Integrate the use of health diagnosis software for elements that are repeatedly used on ballooning platforms.
For astrophysics, autonomous systems can enable and enhance missions that deliver revolutionary data sets, reduce the cost of missions, and reduce the burden on scientists in developing and maintaining observing programs. A future of "press button, get science" missions is on the horizon, but work remains that requires the community's awareness and support.

The primary gap is cultural. Autonomous systems imply a different paradigm of design and operations compared to traditionally-commanded systems. Compared to autonomy in the private sector, a small proportion of our space science and spaceflight communities have relevant expertise to review the opportunities and risks associated with autonomous systems. This is compounded by the traditional engineering preference for heritage designs and expectations of predictability. On point, how can scientists, engineers, and proposal reviewers be confident in a mission concept that operates itself? Is it possible to design and deploy systems that are partially autonomous to placate the concerns of the community? These questions need to be formally addressed if NASA is to meet its legal requirement to perform servicing for future large, space-based observatories, let alone to reap the benefits of autonomous systems for observation.

Limited institutional capacity to adopt autonomous systems is exemplified by the examples of autonomous systems above: most of these missions were or will be designed and built by NASA. Given the high cost and risk associated with changing the process by which spacecraft are designed, built, and tested, spacecraft vendors have until now relied on conditional sequencing rather than autonomous planning/execution. Thus, government-industry cooperation is required to make use of autonomous systems reliably and repeatably for all NASA missions.

Recommendation:
NASA should incentivize the use of autonomous systems for competed space missions. Specifically, smallsat missions, missions of opportunity, SMEX, MIDEX, and Probe missions can include a credit for using the technology.
We note that transient event observatories offer a low risk path to maturing this critical technology.
Another aspect of the cultural gap is NASA's definition of technology readiness and its reference to an "operational environment," which is overly restrictive for software technologies that can be effectively validated outside of the operational environment, e.g. on-board science data processing software.
Recommendation:
NASA should evaluate the applicability of the Technology Readiness Level as a means of evaluating the maturity of autonomy and on-board data processing software.

Other gaps are technical. Autonomy frameworks, described in Appendix A, define rules for how system-level planning and execution interface with traditional components and systems and with functional autonomy. Community acceptance of these frameworks can reduce adoption risk and promote repeatability by permitting traditional design and operations approaches. By defining a convention for how autonomous missions should be designed and built, frameworks would also improve the reviewability of autonomous missions and the portability of testing methodology. Remaining work includes improving the verifiability of tasknets, which is critical to reaping the benefits of integrated fault protection. Relatedly, telemetry that permits reconstruction of on-board decision-making requires further study and definition. Ground systems and tools for commanding autonomous spacecraft require further maturation.

Finally, some observations will benefit from on-board data analysis. For these observations, new software will be required to perform this function, which will be the responsibility of the science community. While not the subject of this white paper, computationally intensive data processing may require high performance computing. High performance computing does not necessarily enable autonomous systems, but it is enhancing, permitting intensive processing of science data and on-board scheduling over larger search spaces.
Recommendation:
NASA should fund technology demonstrations of high-performance space-based computing for on-board data processing.
Appendix A: Example of a System Autonomy Framework
In order to create autonomous systems repeatably and reliably, a framework must be defined. Similar to a legal constitution, a system autonomy framework defines the responsibilities and capabilities of system components and how they interface with one another in governing system behavior. For instance, the manner in which on-board science data processing is interfaced to inform decisions at a system level should be identical across all missions, regardless of which decisions it informs. This enables reliable multi-mission adoption of on-board planning/execution. Under a unified framework, engineers, scientists, and managers can work toward the same set of requirements that assure mission success. Reviewers can also use the same framework to verify compliance. Without a unified framework, development, mission assurance, and review can become intractable given the complexity of designing autonomous systems.

There are several requirements that define an effective autonomy framework. It should make guarantees about acceptable behavior, enable confident operator oversight and insight, readily accommodate new information, and not require extensive tailoring or ad hoc modification to support multiple missions. Pragmatically, such a framework must afford a practical path toward adoption. To do so, system autonomy must integrate with existing components. Human workflows involved across mission phases should deviate minimally from existing practice. Also, the framework must support varying degrees of autonomy, permitting everything from sequence-like commanding to highly autonomous operation. Without this practical path, NASA and industry partners will have to invest in brand new software and processes and accept significant risk in implementing a major leap toward system autonomy all at once.

There are several examples of such a framework.
The Framework for Robust Execution and Scheduling of Commands On-Board (FRESCO) is under development at NASA JPL; the NASA Platform for Autonomous Systems (NPAS) is under development at NASA SSC; and a vehicle management system for autonomous spacecraft habitat operations is under development at NASA ARC and JSC [13, 52, 53]. Finally, the European Robotic Goal-Oriented Autonomous Controller (ERGO) has been developed by an EU-funded consortium of industry and academia [54]. Below, we use FRESCO as an example to illustrate how systems autonomy is implemented.
Tasknet
Tasknets are data structures that encapsulate the potential envelope of spacecraft behavior. They are graph networks in which nodes are tasks and edges are the state and temporal dependencies between tasks. Tasknets can define goals for the spacecraft to achieve, either to transition states (e.g., an imaging survey goal results in a set of images being taken) or to maintain states (e.g., a pointing knowledge maintenance goal restores pointing knowledge through optical navigation if a knowledge uncertainty threshold is violated). Tasknets have been described in the literature since the 1970s [55, 56].
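As a concrete illustration of the data structure described above, the sketch below models a tasknet as a graph whose nodes are tasks and whose edges are temporal dependencies, with per-task state preconditions. This is hypothetical Python with illustrative names, not the FRESCO implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    # State conditions that must hold before this task may execute
    preconditions: dict = field(default_factory=dict)
    # State transitions the task achieves on success
    effects: dict = field(default_factory=dict)

@dataclass
class Tasknet:
    tasks: dict = field(default_factory=dict)
    # Temporal dependency (a, b): task a must complete before b may start
    edges: list = field(default_factory=list)

    def add_task(self, task):
        self.tasks[task.name] = task

    def add_dependency(self, before, after):
        self.edges.append((before, after))

    def ready_tasks(self, completed, state):
        """Tasks whose temporal predecessors are done and whose
        state preconditions are satisfied by the current state."""
        ready = []
        for name, task in self.tasks.items():
            if name in completed:
                continue
            preds_done = all(b in completed for (b, a) in self.edges if a == name)
            state_ok = all(state.get(k) == v for k, v in task.preconditions.items())
            if preds_done and state_ok:
                ready.append(name)
        return ready

# Example: an imaging-survey goal that requires pointing knowledge first
net = Tasknet()
net.add_task(Task("restore_pointing", effects={"pointing_known": True}))
net.add_task(Task("imaging_survey", preconditions={"pointing_known": True}))
net.add_dependency("restore_pointing", "imaging_survey")
```

Encoding both temporal edges and state preconditions is what lets the same structure express one-shot goals and maintenance goals: a maintenance goal is simply a task whose precondition (e.g., a violated uncertainty threshold) can become true again later.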
Planner and Executive
Planners create and maintain schedules of tasknets based on their prioritization and projected timelines of future states. Scheduling tasks is performed by a search function, whose search space can be constrained based on how tasks are defined. This permits traditional, sequence-like behavior or highly autonomous behavior within the same framework. At a set time before scheduled execution, the planner passes tasks to the executive. Executives are responsible for intelligent execution and for monitoring the impact of executed tasks. Under nominal operation, they receive receipt of successful task execution and proceed to dispatch the next scheduled tasks. If a task fails, they can exercise contingency behaviors specified by the task, which can include replanning requests to the planner.

Currently, MEXEC and PLEXIL are maintained by NASA JPL and ARC, respectively [24, 57]. A wider survey of command execution systems is presented in [58].
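The planner/executive handoff described above can be sketched as follows. This is an illustrative stand-in, not the MEXEC or PLEXIL API: a greedy priority sort stands in for the planner's constrained search, and the executive dispatches tasks in order, invoking a task's specified contingency and requesting a replan on failure.

```python
def plan(tasks):
    """Greedy scheduler: order tasks by priority. A stand-in for the
    constrained search over tasknets that a real planner performs."""
    return sorted(tasks, key=lambda t: t["priority"])

def execute(schedule, run_task):
    """Dispatch tasks in order. On success, record the receipt and move on;
    on failure, record the task's contingency and request a replan."""
    log = []
    for task in schedule:
        if run_task(task):
            log.append((task["name"], "ok"))
        else:
            log.append((task["name"], task.get("contingency", "none")))
            return log, True   # replan requested from the planner
    return log, False          # nominal: no replan needed

tasks = [
    {"name": "downlink", "priority": 2},
    {"name": "observe", "priority": 1, "contingency": "safe_pointing"},
]
schedule = plan(tasks)

# Simulate a failed observation: the executive exercises the task's
# contingency behavior and asks the planner to replan.
log, replan = execute(schedule, run_task=lambda t: t["name"] != "observe")
```

Constraining the planner's search (here, by fixing priorities) is what allows the same loop to degenerate to sequence-like commanding or open up to highly autonomous behavior.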
State Database
A state database serves as a "single source of truth" for the system, maintaining component status and abstracted system states used in decision-making.

Figure 4: An example of an autonomous system framework, the Framework for Robust Execution and Scheduling of Commands On-Board (FRESCO), defines capabilities and interfaces that result in repeatable and predictable implementations of systems autonomy for complex systems, such as spacecraft. This figure offers a simplified description of FRESCO components and interfaces.
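A minimal sketch of the "single source of truth" pattern, with illustrative names: estimators publish abstracted component states, and the planner and executive query them during decision-making.

```python
class StateDatabase:
    """Single source of truth for abstracted system state."""

    def __init__(self):
        self._states = {}

    def update(self, component, state):
        # Called by estimators (e.g., health monitoring) as knowledge arrives
        self._states[component] = state

    def query(self, component, default=None):
        # Called by the planner and executive during decision-making
        return self._states.get(component, default)

db = StateDatabase()
db.update("star_tracker", {"healthy": True, "pointing_error_arcsec": 0.4})
```

Centralizing state this way is what keeps the planner, executive, and estimators consistent: no component decides against stale, privately held telemetry.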
System-Level Estimator
Estimators that inform decision-making use system telemetry as input to models of system behavior. System health monitoring software is one such estimator, and it serves two purposes. First, it can identify potentially faulty components to alert operators to potential future risks; in rover testing, it successfully identified undiagnosed hardware problems [51]. Second, it permits the creation of tasknets that operate only if healthy component states are reported, reducing the risks of autonomous operation. MONSID, developed by Okean Solutions, uses linked models of hardware behavior to monitor the health status of components [48, 59, 51].
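The model-based monitoring idea can be sketched as follows. This is an illustrative stand-in, not MONSID: each component carries a simple behavioral model that predicts telemetry from commanded inputs, and a component is flagged off-nominal when observed telemetry deviates from the prediction beyond a tolerance.

```python
def check_health(models, commands, telemetry, tolerance):
    """Return per-component health: True if observed telemetry agrees
    with the model's prediction to within the given tolerance."""
    status = {}
    for name, model in models.items():
        predicted = model(commands[name])
        observed = telemetry[name]
        status[name] = abs(observed - predicted) <= tolerance
    return status

# Toy model: a reaction wheel whose measured speed should track its command
models = {"reaction_wheel": lambda cmd_rpm: cmd_rpm}
commands = {"reaction_wheel": 2000.0}
telemetry = {"reaction_wheel": 1400.0}   # large deviation: wheel is degraded

# Flags the wheel off-nominal (600 rpm deviation exceeds the 50 rpm tolerance)
status = check_health(models, commands, telemetry, tolerance=50.0)
```

Writing the result of such a check into the state database is what lets tasknets gate themselves on healthy component states, the second purpose noted above.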
Function-Level Software and Components
Function-level software can perform a multitude of functions, ranging from hardware control to data processing and functional autonomy. Depending on its function, its internal interfaces will vary. For instance, if a hardware controller includes local fault protection, an interface for a signal to interrupt system-level execution over that controller's domain is required. Traditional hardware controllers and on-board data processing can also be used to inform the scheduling of tasknets; the on-board processing of exozodiacal light in exoplanet coronagraphy serves as an example.
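The interrupt interface described above can be sketched as follows (hypothetical names, not a flight API): a hardware controller with local fault protection trips on an overcurrent condition and signals system-level execution through a callback supplied by the executive.

```python
class HardwareController:
    """Function-level controller with local fault protection."""

    def __init__(self, name, interrupt_executive):
        self.name = name
        # Callback into system-level execution, invoked on a local fault
        self._interrupt = interrupt_executive

    def command(self, setpoint, current_limit=1.0):
        # Local fault protection: trip if drive current exceeds the limit
        drawn = self._drive(setpoint)
        if drawn > current_limit:
            self._interrupt(self.name, f"overcurrent: {drawn:.2f} A")
            return False
        return True

    def _drive(self, setpoint):
        # Toy actuator model: current draw proportional to the setpoint
        return 0.3 * setpoint

interrupts = []
ctrl = HardwareController("gimbal", lambda src, msg: interrupts.append((src, msg)))
ok = ctrl.command(5.0)   # 1.5 A exceeds the 1.0 A limit: local trip
```

The controller handles the fault locally (refusing the command) while the callback gives the executive the system-level signal it needs to suspend or replan tasks in that controller's domain.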
Acknowledgements
Thank you to Ellen Van Wyk, NASA JPL, for illustrations included in this white paper, and to SpaceX for the cover photograph.

References

[1] NASA Autonomous Systems – Systems Capability Leadership Team. Autonomous Systems Taxonomy. 2019.
[2] CF Kerry and J Karsten. Gauging investment in self-driving cars. url: .
[3] Space Studies Board, National Research Council, et al. Vision and Voyages for Planetary Science in the Decade 2013-2022. National Academies Press, 2012.
[4] Steve Chien and Kiri L Wagstaff. "Robotic space exploration agents". In: Science Robotics.
[5] National Aeronautics and Space Administration Authorization Act of 2010: report of the Committee on Commerce, Science, and Transportation on S. 3729. U.S. G.P.O., 2010.
[6] Rudranarayan Mukherjee et al. "When is it worth assembling observatories in space?" In: (July 2019).
[7] Kimberly F Robinson and Andrew Schorr. "NASA's Space Launch System: Exceptional Opportunities for Secondary Payloads to Deep Space". In: 2018, p. 5138.
[8] V Vitaldev. "Automated Scheduling and Operation of a Heterogeneous Fleet of Satellites". In: IWPSS 2019. 2019.
[9] Jason Glenn et al. "The Galaxy Evolution Probe: a concept for a mid and far-infrared space observatory". In: Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave. Vol. 10698. International Society for Optics and Photonics. 2018, p. 106980L.
[10] S Hanany. "PICO Probe of Inflation and Cosmic Origins". In: The Space Astrophysics Landscape for 2020 and Beyond. Apr. 2019.
[11] T-C Chang for the CDIM Team. "CDIM Cosmic Dawn Intensity Mapper". In: The Space Astrophysics Landscape for 2020 and Beyond. Apr. 2019.
[12] Deep Space-1. Navigation: Primary Mission. JPL: DESCANSO Design and Performance Summary Series, Apr. 2004.
[13] Lorraine Fesq and Rashied Amini. "A New Paradigm for Autonomous Spacecraft: From Research to Deployment". In: Low Cost Planetary Missions (LCPM-13). 2019.
[14] T Weismuller and M Leinz. "GN&C technology demonstrated by the orbital express autonomous rendezvous and capture sensor system". In: American Astronautical Society. 2006.
[15] R Ticker. Restore-L Mission Information. url: .
[16] P Maldague et al. "APGEN: A multi-mission semi-automated planning tool". In: First International NASA Workshop on Planning and Scheduling. 1998, pp. 363–365.
[17] Christopher Grasso and Patricia Lock. "VML sequencing: Growing capabilities over multiple missions". In: SpaceOps 2008 Conference. 2008, p. 3295.
[18] D Palmer. GRB Follow Up Observation Autonomy. In correspondence. June 2019.
[19] David S Mittman and Robert Hawkins. "Scheduling Spitzer: the SIRPASS story". In: (2013).
[20] S. Chien et al. "ASPEN - Automating Space Mission Operations using Automated Planning and Scheduling". In: International Conference on Space Operations (SpaceOps 2000). Toulouse, France, June 2000.
[21] Daniel Tran et al. "The autonomous sciencecraft experiment onboard the EO-1 spacecraft". In: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM. 2005, pp. 163–164.
[22] Steve Chien et al. "Onboard autonomy on the Intelligent Payload EXperiment (IPEX) CubeSat mission: A pathfinder for the proposed HyspIRI mission intelligent payload module". In: Proc. 12th International Symposium on Artificial Intelligence, Robotics and Automation in Space, Montreal, Canada. 2014.
[23] Lorraine Fesq et al. "Extended Mission Technology Demonstrations Using the ASTERIA Spacecraft". In: IEEE. 2019, pp. 1–11.
[24] Vandi Verma et al. "Autonomous Science Restart for the Planned Europa Mission with Lightweight Planning and Execution". In: International Workshop on Planning and Scheduling for Space (IWPSS 2017), Pittsburgh, PA. 2017.
[25] Gregg Rabideau and Ed Benowitz. "Prototyping an onboard scheduler for the Mars 2020 rover". In: Proceedings of International Workshop on Planning and Scheduling for Space, Pittsburgh, PA. 2017.
[26] Philippe Laborie and Malik Ghallab. "Planning with sharable resource constraints". In: Proceedings of the 14th International Joint Conference on Artificial Intelligence. Vol. 2. Citeseer. 1995, pp. 1643–1649.
[27] Javier Barreiro et al. "EUROPA: A platform for AI planning, scheduling, constraint programming, and optimization". In: (2012).
[28] Conor McGann et al. "T-REX: A model-based architecture for AUV control". In: Vol. 2007. 2007.
[29] Nikolaus Volgenau and Todd Boroson. "Two years of LCOGT operations: the challenges of a global observatory". In: Observatory Operations: Strategies, Processes, and Systems VI. Vol. 9910. International Society for Optics and Photonics. 2016, p. 99101C.
[30] T Boroson et al. "Science operations for LCOGT: a global telescope network". In: Observatory Operations: Strategies, Processes, and Systems V. Vol. 9149. International Society for Optics and Photonics. 2014, 91491E.
[31] RA Street et al. "General-purpose software for managing astronomical observing programs in the LSST era". In: Software and Cyberinfrastructure for Astronomy V. Vol. 10707. International Society for Optics and Photonics. 2018, p. 1070711.
[32] A Bridger. "The ALMA Observing Tool: Proposal Preparation & Submission". In: ALMA Early Science Workshop, IRAM. ALMA. 2010.
[33] Mark D Johnston. "SPIKE: AI scheduling for NASA's Hubble Space Telescope". In: Sixth Conference on Artificial Intelligence for Applications. IEEE. 1990, pp. 184–190.
[34] PK Barnes, AT Haddock, and CA Cruzen. "Autonomous Science Operations Technologies for Deep Space Gateway". In: Deep Space Gateway Concept Science Workshop. Vol. 2063. 2018.
[35] Roger M Smith et al. "The Zwicky Transient Facility observing system". In: Ground-based and Airborne Instrumentation for Astronomy V. Vol. 9147. International Society for Optics and Photonics. 2014, p. 914779.
[36] Stephen B Cenko. "The Gravitational-wave Ultraviolet Counterpart Imager (GUCI) Network". In: American Astronomical Society Meeting Abstracts. Vol. 234. 2019.
[37] Judith Racusin. "Current and Future Large Missions as Science Motivation for GRB Detecting SmallSats". In: Towards a Network of GRB Detecting Nanosatellites. 2018.
[38] Rashied Amini. "Utilizing Fractionated Space Mission Design and Small Satellites for a Next Generation Gamma Ray Burst Observatory". In: 2008.
[39] Sreeja Nag et al. "Autonomous Scheduling of Agile Spacecraft Constellations with Delay Tolerant Networking for Reactive Imaging". In: 2019.
[40] William Gerstenmaier and Jason Crusan. Cislunar and Gateway Overview. url: .
[41] Jack Burns, Gregg Hallinan, et al. "CDIM Cosmic Dawn Intensity Mapper". In: The Space Astrophysics Landscape for 2020 and Beyond. Apr. 2019.
[42] B Mennesson et al. "Constraining the exozodiacal luminosity function of main-sequence stars: complete results from the Keck Nuller mid-infrared surveys". In: The Astrophysical Journal. The Astrophysical Journal. IEEE. 2018, pp. 1–13.
[45] Gordon B Aaseng, Adam Sweet, and John Ossenfort. "Performance Analysis of an Autonomous Fault Management System". In: 2018, p. 5149.
[46] Jeremy D Frank and Gordon B Aaseng. "Transitioning Autonomous Systems Technology Research to a Flight Software Environment". In: AIAA SPACE 2016. 2016, p. 5530.
[47] Ksenia Kolcio, Louis Breger, and Paul Zetocha. "Model-based fault management for spacecraft autonomy". In: IEEE. 2014, pp. 1–14.
[48] Ksenia Kolcio and Lorraine Fesq. "Model-based off-nominal state isolation and detection system for autonomous fault management". In: IEEE. 2016, pp. 1–13.
[49] Daniel Gaines et al. "Productivity challenges for Mars rover operations". In: The 26th International Conference on Automated Planning and Scheduling. 2016.
[50] Daniel Gaines et al. "Productivity challenges for Mars rover operations". In: The 26th International Conference on Automated Planning and Scheduling. 2016.
[51] Ksenia Kolcio, Ryan Mackey, and Lorraine Fesq. "Model-Based Approach to Rover Health Assessment - Mars Yard Discoveries". In: IEEE. 2019, pp. 1–12.
[52] Fernando Figueroa, Mark Walker, and Lauren W Underwood. "NASA Platform for Autonomous Systems (NPAS)". In: AIAA Scitech 2019 Forum. 2019, p. 1963.
[53] Richard Levinson et al. "Development and Testing of a Vehicle Management System for Autonomous Spacecraft Habitat Operations". In: 2018, p. 5148.
[54] J Ocón et al. "The ERGO framework and its use in planetary/orbital scenarios". In: Proc. 69th International Astronautical Congress (IAC), IAF, Bremen, Germany. 2018.
[55] Richard E. Fikes and Nils J. Nilsson. "STRIPS: A new approach to the application of theorem proving to problem solving". In: Artificial Intelligence. url: .
[56] SA Vere. "Deviser: An AI planner for spacecraft operations". In: (1985).
[57] Vandi Verma et al. "Universal-Executive and PLEXIL: engine and language for robust spacecraft control and operations". In: Space 2006. 2006, p. 7449.
[58] Vandi Verma et al. "Survey of command execution systems for NASA spacecraft and robots". In: (2005).
[59] Ksenia Kolcio, Lorraine Fesq, and Ryan Mackey. "Model-based approach to rover health assessment for increased productivity". In: 2017 IEEE Aerospace Conference.