Modeling Human Intent


For human-robot interaction and collaboration to succeed, robots must be able to understand and interpret human input. We develop methods that enable robots to recognize, computationally model, and comprehend human intent from a variety of multimodal human cues, facilitating natural and fluid human-robot interactions.


Active Learning in Shared Autonomy

In this project, we introduce active learning into shared autonomy by developing a method that lets the system gather information deliberately: it reduces its uncertainty about the user's goal by moving to information-rich states and observing the user's input there. We create a framework that balances information-gathering actions, which help the system learn about the user's goals, with goal-oriented actions, which move the robot toward the goal the system has inferred from the user.
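The trade-off described above can be sketched in a toy setting. The snippet below is a minimal illustration, not the project's actual method: a robot on a line maintains a belief over two candidate goals, updates it from simulated user input with a Boltzmann-style likelihood, and scores each candidate action by a weighted sum of expected information gain and progress toward the currently inferred goal. All names, goal positions, and parameters (`GOALS`, `rationality`, `trade_off`) are illustrative assumptions.

```python
import math

# Illustrative 1-D shared-autonomy sketch (all values are assumptions):
# the user intends one of two candidate goals, and the system trades off
# information gathering against progress toward its inferred goal.

GOALS = [-5.0, 5.0]  # candidate goal positions (hypothetical)

def belief_update(belief, user_input, robot_pos, rationality=1.0):
    """Bayes update: user input toward a goal is more likely under that goal."""
    weights = []
    for b, g in zip(belief, GOALS):
        direction = 1.0 if g > robot_pos else -1.0
        likelihood = math.exp(rationality * user_input * direction)
        weights.append(b * likelihood)
    z = sum(weights)
    return [w / z for w in weights]

def entropy(belief):
    return -sum(b * math.log(b) for b in belief if b > 0)

def expected_entropy_after(belief, robot_pos, action):
    """Expected posterior entropy after taking `action`, assuming the user
    then pushes toward their true goal from the new robot position."""
    nxt = robot_pos + action
    exp_h = 0.0
    for b, g in zip(belief, GOALS):
        simulated_input = 1.0 if g > nxt else -1.0
        exp_h += b * entropy(belief_update(belief, simulated_input, nxt))
    return exp_h

def choose_action(belief, robot_pos, actions=(-1.0, 1.0), trade_off=0.5):
    """Score actions by info gain plus progress toward the inferred goal."""
    inferred = GOALS[max(range(len(GOALS)), key=lambda i: belief[i])]
    def score(a):
        info_gain = entropy(belief) - expected_entropy_after(belief, robot_pos, a)
        progress = -abs(inferred - (robot_pos + a))  # closer is better
        return trade_off * info_gain + (1 - trade_off) * progress
    return max(actions, key=score)

# After observing rightward user input, belief shifts toward the +5 goal
# and the scored action moves the robot in that direction.
b = belief_update([0.5, 0.5], 1.0, 0.0)
a = choose_action(b, 0.0)
```

The `trade_off` weight makes the balance explicit: near 1 the system prioritizes moving to states where user input is most disambiguating, near 0 it simply drives toward its current best guess.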

Go to Project

Get Involved

Interested in this research area?
