RESEARCH AREAS

Interaction in Virtual, Augmented, and Mixed Reality (VR/AR/MR)

Virtual, augmented, and mixed reality technologies hold great promise for providing users with new ways to interact with information through immersive 3D visuals. We are exploring the capabilities of these technologies and designing novel systems that leverage VR/AR/MR for applications across robotics, computer-supported collaborative work, and visualization.


Modeling Human Intent

For human-robot interactions and collaborations to be successful, robots must have some means of understanding and interpreting human input. We develop methods that enable robots to recognize, computationally model, and comprehend human intent from a variety of multimodal human cues, facilitating natural and fluid human-robot interactions.


Repurposing Robots

In collaboration with the Superhuman Computing Lab, we are investigating how robots might be repurposed such that their use transcends traditional notions of “pick-and-place” or data collection tasks. We envision a wider design space for robotics and are examining how robots can provide opportunistic tangible input and haptic output for traditional Graphical User Interfaces (GUIs) and act as assistive devices for users with disabilities.


Telerobotics

Despite advances in autonomous and semi-autonomous capabilities, robot deployments still rely on some form of human teleoperation or supervision, and the need for human intervention in robotic operations will remain well into the future. We develop new systems and interfaces that enable remote human operators to control robots more effectively, for instance by improving overall task performance, enhancing operator situational awareness, and reducing cognitive load.