In order to solve tasks such as doing the laundry, a robot needs to compute a high-level strategy (should I use the basket?) as well as the joint movements that it should execute. Unfortunately, approaches for making high-level decisions rely on "task planning" abstractions that are lossy and can produce strategies that have no feasible motion plans.
We are developing new methods for dynamically refining the task-planning abstraction to produce combined task and motion plans that are executable.
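The idea of dynamically refining a task-planning abstraction can be illustrated with a toy loop: plan at the task level, try to refine each abstract action into a motion, and when refinement fails, record that transition as a constraint and replan. This is a minimal sketch under assumed, hypothetical helpers (`task_plan`, `motion_feasible`, the grid of locations), not the actual algorithm used in this work:

```python
# Toy sketch of a task-and-motion planning refinement loop.
# All names and the example domain are illustrative, not a real API.
from collections import deque

def task_plan(start, goal, blocked_edges, adjacency):
    """High-level planner: BFS over abstract locations, avoiding
    transitions already known to have no feasible motion."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if (path[-1], nxt) in blocked_edges or nxt in seen:
                continue
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None

def plan_with_refinement(start, goal, adjacency, motion_feasible):
    """Alternate between task planning and motion refinement: when a
    transition has no motion plan, add it as a constraint and replan."""
    blocked = set()
    while True:
        plan = task_plan(start, goal, blocked, adjacency)
        if plan is None:
            return None  # no executable plan under the current model
        for a, b in zip(plan, plan[1:]):
            if not motion_feasible(a, b):
                blocked.add((a, b))  # refine the abstraction
                break
        else:
            return plan  # every step refined into a feasible motion

# Example: moving 'hall' -> 'table' directly is geometrically blocked,
# so the refined plan reroutes via 'kitchen'.
adj = {'hall': ['table', 'kitchen'], 'kitchen': ['table'], 'table': []}
feasible = lambda a, b: (a, b) != ('hall', 'table')
print(plan_with_refinement('hall', 'table', adj, feasible))
```

The point of the sketch is the feedback direction: infeasibility discovered at the motion level flows back into the task-level model, so the combined plan that comes out is executable by construction.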
"Be wise, generalize!"
Planning is well known to be a hard problem. We are developing methods for acquiring useful knowledge while computing plans for small problem instances. This knowledge is then used to aid planning in larger, more difficult problems.
Often, our approaches can extract algorithmic, generalized plans that efficiently solve large classes of similar problems, as well as problems with uncertainty in the quantities of objects that the agent needs to work with. The generalized plans we compute are easier to understand and come with proofs of correctness.
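A toy example conveys what "generalized" means here: rather than a fixed action sequence for one instance, a plan with a loop solves every instance of a problem family, regardless of how many objects it contains. The domain and names below are illustrative assumptions, not taken from our systems:

```python
# Toy illustration of a generalized plan: one looping strategy that
# clears a tower of blocks, whatever its height. Illustrative only.

def generalized_unstack(tower):
    """Generalized plan: while a block remains on the tower,
    unstack the top block onto the table."""
    actions = []
    stack = list(tower)
    while stack:                      # the loop handles any object count
        top = stack.pop()
        actions.append(('unstack', top, 'table'))
    return actions

# The same plan solves towers of size 2, 5, or 50; correctness is easy
# to argue (one unstack per block, terminating when the tower is empty).
for n in (2, 5, 50):
    assert len(generalized_unstack([f'b{i}' for i in range(n)])) == n
print(generalized_unstack(['a', 'b']))
```

The loop structure is also what makes such plans readable and amenable to correctness proofs: the argument is an invariant over the loop, not a case analysis over instance sizes.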
It would be impractical to reason about multi-step tasks such as setting a table for dinner at the lowest level of modeling (e.g., individual joint movements). In order to build autonomous systems that help in complex, multi-step tasks (where we may actually need help), we draw upon the best example of intelligent systems we know: humans. As humans, we rarely think about where to place our knees and elbows. Instead, we reason about achieving abstracted regions of configurations (e.g., reaching the fifth floor).
Automatically coming up with the right abstraction to solve a problem efficiently is notoriously challenging. We are developing new methods for utilizing abstractions in sequential decision making (SDM), for evaluating the effect of abstractions on models for SDM, and for searching for abstractions that aid in solving a given SDM problem.
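The basic mechanics of state abstraction can be sketched in a few lines: an abstraction function maps concrete states to abstract ones, the transition system is lifted through that map, and planning happens in the much smaller abstract space. Everything below (the function names, the floor/pose domain) is an assumed toy example, not a method from our papers:

```python
# Minimal sketch of state abstraction for sequential decision making:
# concrete states are mapped to abstract states by phi, and planning
# runs over the lifted (smaller) model. Illustrative names only.
from collections import deque

def abstract_model(transitions, phi):
    """Lift a concrete transition system through phi: an abstract edge
    exists if some pair of underlying concrete states is connected."""
    return {(phi(s), phi(t)) for s, t in transitions if phi(s) != phi(t)}

def abstract_plan(start, goal, abs_edges):
    """BFS in the abstract space; returns a sequence of abstract states."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for a, b in abs_edges:
            if a == path[-1] and b not in seen:
                seen.add(b)
                frontier.append(path + [b])
    return None

# Concrete states are (floor, pose) pairs; phi keeps only the floor,
# abstracting away within-floor detail (cf. "reach the fifth floor").
transitions = [(('f1', 'door'), ('f1', 'stairs')),
               (('f1', 'stairs'), ('f2', 'stairs')),
               (('f2', 'stairs'), ('f2', 'desk'))]
phi = lambda s: s[0]
edges = abstract_model(transitions, phi)
print(abstract_plan('f1', 'f2', edges))   # plan over floors, not poses
```

Note that this lifted model can be lossy: an abstract edge only guarantees that *some* concrete refinement exists, which is precisely why evaluating and choosing abstractions is itself a research problem.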
A real robot never has perfect sensors or actuators. Instead, an intelligent robot needs to be able to solve the tasks assigned to it while handling uncertainty about the environment as well as about the effects of its own actions. This is a challenging computational problem, but also one that humans solve on a routine basis (we don't have perfect sensors or actuators either!).
We are developing new methods for efficiently expressing and solving problems where the agent has limited, incomplete information about the quantities and identities of the objects that it may encounter.
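One simple form such reasoning can take is maintaining a belief, a probability distribution over how many objects are present, and updating it as noisy observations arrive. The sensor model and numbers below are invented for illustration and are not drawn from our methods:

```python
# Toy sketch of uncertainty over object quantities: a Bayesian update
# of a belief over the number of objects, given a noisy count.
# The sensor model here is an assumption made for illustration.

def update_belief(belief, observed, p_correct=0.8):
    """Bayes update: the sensor reports the true count with probability
    p_correct; otherwise it is off by one in either direction."""
    def likelihood(true_n):
        if observed == true_n:
            return p_correct
        if abs(observed - true_n) == 1:
            return (1 - p_correct) / 2
        return 0.0
    posterior = {n: p * likelihood(n) for n, p in belief.items()}
    z = sum(posterior.values())
    return {n: p / z for n, p in posterior.items()} if z else belief

# Uniform prior over 1..5 objects; after observing a count of 3 twice,
# the belief concentrates on 3.
belief = {n: 0.2 for n in range(1, 6)}
for _ in range(2):
    belief = update_belief(belief, observed=3)
print(max(belief, key=belief.get))
```

Scaling this idea up, to unknown identities as well as quantities, and to beliefs that feed into planning, is what makes the general problem computationally hard.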