NTRS - NASA Technical Reports Server
Natural Language Based Multimodal Interface for UAV Mission Planning
Abstract
As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted to evaluate the efficacy of the multimodal interface.
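As a rough illustration of the fusion scheme the abstract describes (a gesture selects a segment primitive, speech supplies the geometry that completes it, and the segments concatenate into a flight path), here is a minimal Python sketch. All names below, such as TrajectorySegment, FlightPath, and the parameter keys, are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of gesture/speech fusion for trajectory building.
# Names and parameter keys are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class TrajectorySegment:
    shape: str                  # general shape from the gesture input, e.g. "line" or "arc"
    parameters: dict = field(default_factory=dict)  # geometry from speech, e.g. {"length_m": 50}


@dataclass
class FlightPath:
    segments: list = field(default_factory=list)

    def add_segment(self, gesture_shape: str, speech_params: dict) -> None:
        # Fuse the two modalities: the gesture picks the primitive,
        # while speech fills in the geometric details that fully
        # characterize the segment.
        self.segments.append(TrajectorySegment(gesture_shape, speech_params))


# Example: a path built from two multimodal inputs.
path = FlightPath()
path.add_segment("line", {"length_m": 50, "heading_deg": 90})
path.add_segment("arc", {"radius_m": 20, "arc_deg": 180})
```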
Document ID
20170010118
Acquisition Source
Langley Research Center
Document Type
Conference Paper
Authors
Chandarana, Meghan
(Carnegie-Mellon Univ., Pittsburgh, PA, United States)
Meszaros, Erica L.
(Chicago Univ., Chicago, IL, United States)
Trujillo, Anna
(NASA Langley Research Center, Hampton, VA, United States)
Allen, B. Danette
(NASA Langley Research Center, Hampton, VA, United States)
Date Acquired
October 16, 2017
Publication Date
October 9, 2017
Subject Category
Cybernetics, Artificial Intelligence and Robotics
Report/Patent Number
NF1676L-26740
Meeting Information
Meeting: Human Factors and Ergonomics Society (HFES) International Annual Meeting 2017
Location: Austin, TX
Country: United States
Start Date: October 9, 2017
End Date: October 13, 2017
Sponsors: Human Factors and Ergonomics Society
Funding Number(s)
WBS: 736466.01.08.07.43.50.02
Distribution Limits
Public
Copyright
Public Use Permitted.