NTRS - NASA Technical Reports Server

Enabling a Voice Management System for Space Applications

The sustainable missions beyond Low Earth Orbit (LEO) envisioned for NASA’s Artemis program will require autonomous capabilities. Moreover, Artemis mission crews will need a means to efficiently interact with a spacecraft’s autonomous systems. This interaction can be facilitated by voice and speech communications because voice-based controls enable users to interact hands- and eyes-free, allowing the user to better focus on critical tasks. The goal of our project was to explore the knowledge and technology needed to successfully design effective Voice User Interfaces (VUIs) for autonomous systems utilizing Human Centered Design (HCD) principles. The human factors aspect of this engineering effort pays close attention to psychological and physiological principles in the development of autonomous crew operation systems. A main objective was to understand how a crew member, through voice interaction, could efficiently and intuitively communicate with a notional autonomous vehicle system manager. This project was part of the NASA Moon to Mars eXploration Systems and Habitation (M2M X-Hab) 2020 Academic Innovation Challenge.

The work from the BLiSS Team at the University of Michigan resulted in the design of a system persona, Diego, through which an astronaut may quickly build trust with autonomous systems, alleviating known stressors on mental health expected during long-duration space missions. Optimal software to facilitate integration of the system persona into a reference lunar-orbiting Gateway station was defined. Additionally, a Speech to Text (STT) system and a Graphical User Interface (GUI) that could be implemented in future missions were developed on an Internet of Things (IoT) platform.

The Voice User Interface (VUI) design for the M2M X-Hab 2020 project leveraged previous technology developed by the BLiSS team to incorporate a voice-based interface into NASA’s Platform for Autonomous Systems (NPAS) software. This required technologies to convert voice to text, conduct semantic interpretation, and convert responses from the autonomous system to text and then to speech; additionally, the spacecraft background noise environment was assessed, a noise mitigation technique was developed, and a relatable personality was crafted for the autonomous system to facilitate human-like conversations.
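
To make the pipeline described above concrete, the following is a minimal sketch in Python. It assumes off-the-shelf stand-ins (the speech_recognition and pyttsx3 packages) for the mission-grade speech-to-text and text-to-speech components; query_npas() is a hypothetical placeholder for the interface to NASA’s NPAS software, and the ambient-noise calibration step merely stands in for the project’s noise-mitigation technique.

```python
# Minimal voice-pipeline sketch: voice -> text -> autonomous system -> text -> voice.
# speech_recognition and pyttsx3 are stand-ins for mission-grade STT/TTS components;
# query_npas() is a hypothetical placeholder for the NPAS interface.
import speech_recognition as sr
import pyttsx3


def query_npas(request_text):
    """Hypothetical stub: forward the interpreted request to the autonomous
    system manager and return its textual response."""
    return f"Acknowledged: {request_text}"


def listen_once(recognizer):
    """Capture one utterance, applying ambient-noise calibration before
    listening (a simple form of background-noise mitigation)."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=1.0)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # speech-to-text


def speak(engine, text):
    """Convert the system's textual response back to speech."""
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    recognizer = sr.Recognizer()
    tts = pyttsx3.init()
    try:
        request = listen_once(recognizer)   # voice -> text
        response = query_npas(request)      # text -> autonomous system -> text
        speak(tts, response)                # text -> voice
    except sr.UnknownValueError:
        speak(tts, "I did not catch that. Please repeat your request.")
```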

The success of our effort was largely due to the diversity of the team, which included expertise in Space Systems Engineering, Human Computer Interaction, Aerospace Engineering, Computer Science, Biomedical Engineering, and Applied Physics. The diverse perspectives fostered in-depth discussions, resulting in the conception of three main subsystems: (1) User-System, (2) NPAS-System, and (3) Environment-System. The VUI was unique and had to be both efficient and intuitive. For this project, five subteams were formed, each with a separate objective, including the Voice Design, Background Noise Mitigation, Software Integration, and Graphical User Interface teams. The BLiSS team crafted a personality for the VUI to enable human-like conversation and drive user adoption and trust. User surveys were conducted to help determine the required VUI system personality traits by capturing the perspectives and expectations of prospective “Artemis Generation Astronauts”. To further simulate human-like conversations, the system had to quickly interpret user speech and integrate with NASA’s NPAS platform for quick and reliable information transfer. The outcomes of our research were: (1) a working prototype user interface that is compatible with NASA’s NPAS platform; (2) software that demonstrates the ability of the VUI system to interpret user requests and respond appropriately; (3) the capability to implement fully expanded conversations between user and system using intuitive communication in four request categories; and (4) software and hardware recommendations that optimize the system’s ability to operate in a noisy environment. Our research has laid the foundation for the development of VUIs for autonomy and provides a baseline for future VUI developments.
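
As an illustration of outcome (2) and the four request categories mentioned above, the sketch below shows one simple way user requests could be mapped to categories via keyword matching. The category names and keyword lists are illustrative assumptions, not the categories defined in the paper.

```python
# Hypothetical intent-interpretation sketch. The four category names and the
# keyword lists are illustrative assumptions; the prototype's actual request
# categories are defined in the paper.
from typing import Optional

REQUEST_CATEGORIES = {
    "system_status":   ["status", "health", "state"],
    "telemetry_query": ["temperature", "pressure", "reading"],
    "procedure_help":  ["procedure", "steps", "how do i"],
    "anomaly_report":  ["alarm", "fault", "anomaly"],
}


def classify_request(utterance: str) -> Optional[str]:
    """Return the first category whose keywords appear in the utterance,
    or None when no category matches (prompting a clarification dialog)."""
    text = utterance.lower()
    for category, keywords in REQUEST_CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return None


print(classify_request("Diego, what is the cabin pressure reading?"))
# -> "telemetry_query"
```
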
Document ID
20210022373
Acquisition Source
Stennis Space Center
Document Type
Conference Paper
Authors
Tara Vega
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Ariana Bueno
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Chad Cerutti
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Haoran Chang
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Matthew Garvin
(Universities Space Research Association, Ann Arbor, Michigan, United States)
Catalina Garza
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Parker Kurlander
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Nilton Renno
(University of Michigan–Ann Arbor, Ann Arbor, Michigan, United States)
Fernando Figueroa
(Stennis Space Center, Bay Saint Louis, Mississippi, United States)
Lauren Underwood
(Stennis Space Center, Bay Saint Louis, Mississippi, United States)
Date Acquired
October 5, 2021
Subject Category
Cybernetics, Artificial Intelligence And Robotics
Meeting Information
Meeting: IEEE Aerospace Conference
Location: Big Sky, Montana
Country: US
Start Date: March 5, 2022
End Date: March 12, 2022
Sponsors: Institute of Electrical and Electronics Engineers
Funding Number(s)
CONTRACT_GRANT: 80NSSCC18M0035
CONTRACT_GRANT: 80NM0018D0004P00002
Distribution Limits
Public
Copyright
Public Use Permitted.
Technical Review
External Peer Committee
Keywords
Voice user interface
Autonomy voice interface
Voice interface for autonomous systems
Voice interface for space systems