NTRS - NASA Technical Reports Server

Automated Generation and Assessment of Autonomous Systems Test Cases

Verification and validation testing of autonomous spacecraft routinely culminates in the exploration of anomalous or faulted mission-like scenarios; this paper reviews the issues involved, using the Dawn mission's test program as an example. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies before, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked.

One method to fill potential test coverage gaps is to automatically generate and execute test cases using algorithms that guarantee desirable coverage properties, for example, generating cases for all possible fault monitors and across all state change boundaries. The scope of coverage is, of course, determined by the capabilities of the test environment: a faster-than-real-time, high-fidelity, software-only simulation allows the broadest coverage, but even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing.

Making detailed predictions for the outcome of such tests can be difficult. When algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, and generating them with tools requires executable models of the design and environment that themselves require a complete test program. Evaluating the results of a large number of mission scenario tests therefore poses special challenges. A good approach is to automatically score the results against a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies and in presenting an overall picture of the state of the test program. In this paper we present a case study based on the automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
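To make the kind of combinatorial generation and scoring described above concrete, the sketch below enumerates faulted scenarios by crossing a set of fault monitors with state-change boundaries and an injection timing (before, during, after), then applies a simple pass/fail scoring metric to a run's telemetry. This is a minimal illustrative sketch only: the monitor names, state transitions, telemetry fields, and scoring weights are hypothetical assumptions, not taken from the Dawn fault-protection design or test program.

```python
# Illustrative sketch: all names and scoring criteria below are hypothetical,
# not drawn from the Dawn mission's actual fault monitors or metrics.
from itertools import product
from dataclasses import dataclass

FAULT_MONITORS = ["star_tracker_dropout", "thruster_undervoltage", "engine_flameout"]
STATE_TRANSITIONS = [("cruise", "thrusting"), ("thrusting", "safe_mode")]
INJECTION_PHASES = ["before", "during", "after"]  # relative to the state change


@dataclass
class TestCase:
    monitor: str
    transition: tuple
    phase: str


def generate_cases() -> list:
    """Cross every fault monitor with every state-change boundary and timing,
    guaranteeing coverage of all monitor/boundary combinations."""
    return [TestCase(m, t, p)
            for m, t, p in product(FAULT_MONITORS, STATE_TRANSITIONS, INJECTION_PHASES)]


def score_run(telemetry: dict) -> float:
    """Toy scoring metric: reward safety-net entry and timely recovery.

    A real scoring scheme would be application specific, as the abstract notes.
    """
    score = 0.0
    if telemetry.get("safe_mode_entered"):
        score += 0.5
    if telemetry.get("recovery_time_s", float("inf")) < 600:
        score += 0.5
    return score


if __name__ == "__main__":
    cases = generate_cases()
    print(f"Generated {len(cases)} faulted scenarios")
    # Score one (simulated) run's telemetry to illustrate the metric.
    print("Sample score:", score_run({"safe_mode_entered": True, "recovery_time_s": 120}))
```

Scoring each run against the same formal metric is what makes results from hundreds or thousands of generated cases comparable, so anomalous runs can be ranked and prioritized without hand-written per-case predictions.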
Document ID
20090035797
Acquisition Source
Jet Propulsion Laboratory
Document Type
Conference Paper
Authors
Barltrop, Kevin J.
(Jet Propulsion Lab., California Inst. of Tech. Pasadena, CA, United States)
Friberg, Kenneth H.
(California Inst. of Tech. Pasadena, CA, United States)
Horvath, Gregory A.
(Jet Propulsion Lab., California Inst. of Tech. Pasadena, CA, United States)
Date Acquired
August 24, 2013
Publication Date
March 3, 2008
Subject Category
Computer Systems
Meeting Information
Meeting: IEEE Aerospace Conference
Location: Big Sky, MT
Country: United States
Start Date: March 1, 2008
End Date: March 8, 2008
Sponsors: Institute of Electrical and Electronics Engineers
Distribution Limits
Public
Copyright
Other
Keywords
Testing challenges
Dawn Fault Protection (DFP)
Dawn
