Recent advances in AI and robotics have led to a resurgence of interest in producing intelligent agents that help us in our daily lives. Such agents must be able to rapidly adapt to the changing goals of their users and the changing environments in which they operate.
These requirements lead to a balancing act that most current systems have difficulty contending with: on the one hand, human interaction and computational scalability favor the use of abstracted models of problems and environments; on the other hand, generating goal-directed behavior in the real world typically requires accurate models that are difficult to obtain and computationally hard to reason with.
This symposium addresses the core research gaps that arise in designing autonomous systems that execute their actions in complex environments using imprecise models. The sources of imprecision may range from computational pragmatism to imperfect knowledge of the actual problem domain. The symposium aims to highlight research directions that address these gaps.
We invite submissions of full papers (6-8 pages) and short/position papers (2-4 pages). We also solicit system demonstrations that highlight how challenges of interest to this symposium were addressed.
Papers should be submitted via the EasyChair portal.
Technically, symposium papers are not considered archival, and a transfer of copyright to AAAI is not required. Symposium authors are free to submit their work to other venues, but should check that those venues have no objection.
Please register here to participate in the symposium.

Important dates:
Paper submission | |
Notification | November 27, 2017 |
Camera-ready deadline | January 23, 2018 |
Accepted papers:
Validation of Hierarchical Plans via Parsing of Attribute Grammars
Situated Planning for Execution Under Temporal Constraints
Flexible Goal-directed Agents' Behavior via DALI MASs and ASP Modules
Creating and Using Tools in a Hybrid Cognitive Architecture
Perspectives on the Validation and Verification of Machine Learning Systems in the Context of Highly Automated Vehicles
SiRoK: Situated Robot Knowledge - Understanding the Balance Between Situated Knowledge and Variability
Teaching Virtual Agents to Perform Complex Spatial-Temporal Activities
Adversarial Regression for Stealthy Attacks in Cyber Physical Systems
Planning Hierarchies and their Connections to Language
Learning Generalized Reactive Policies using Deep Neural Networks
Optimal LTL Planning for Multi-Valued Logics
Constraint-Based Online Transformation of Abstract Plans into Executable Robot Actions
Learning to Act in Partially Structured Dynamic Environment
Representation, Use, and Acquisition of Affordances in Cognitive Systems
Learning Planning Operators from Episodic Traces
Human-Agent Teaming as a Common Problem for Goal Reasoning
Interaction and Learning in a Humanoid Robot Magic Performance
A Framework for Complementary Video-Game Companion Character Behavior
Reasoning About Domains with PDDL
On Chatbots Exhibiting Goal-Directed Autonomy in Dynamic Environments
Safe Goal-Directed Autonomy and the Need for Sound Abstractions
Exploiting Micro-Clusters to Close The Loop in Data-Mining Robots for Human Monitoring
Robot Behavioral Exploration and Multi-modal Perception using Dynamically Constructed Controllers
Learning Abstractions by Transferring Abstract Policies to Grounded State Spaces
Information-Efficient Model Identification for Tensegrity Robot Locomotion
Goal reasoning (GR) allows an agent to dynamically reason about the relative utilities of goals it could pursue, which may result in changing its active goals. We have studied processes that support GR, focusing on situation assessment and decision making, and its application in (simulated and real) deliberative control tasks. I will present a simple integrated model for GR, review a progression of our work and how it relates to SIRLE’s themes, and summarize some of our current research objectives.
Dr. David Aha leads the Adaptive Systems Section within NRL’s Navy Center for Applied Research in AI. His research interests include intelligent (e.g., goal reasoning) agents, planning, case-based reasoning, explainable AI (XAI), machine learning, and related topics. He co-organized 35 international events related to these topics (e.g., ICCBR-17, the AAAI-18 Senior Member Track, the FAIM-18 XAI Workshop), launched the UCI Repository of ML Databases, served as an AAAI Councilor, co-created the AAAI Video Competition, received 5 publication awards, and gave the IAAI-17 Engelmore Memorial Lecture. He has led 4 DARPA (e.g., XAI) or ONR evaluation teams. His group regularly hosts post-doctoral researchers and many summer visitors.
We discuss the opportunities for autonomous systems to perform reflection on their planners by adapting the models used to build plans. We first describe model-based planning systems, a form of automated planning system driven by declarative models of the planning domain. These models include descriptions of the conditions and effects of actions on the state of the world. When planning the activities of cyber-physical systems, the command and data representation of the system must be formally abstracted to the actions and states described in the planning system model. When the execution of a plan either fails or produces unexpected outcomes, the execution trace can be abstracted and compared to the predicted state according to the planning model, producing a list of discrepancies; these discrepancies can then be used to fix the model. This provides part of a reflection capability, namely, a set of well-formed problems with the domain model, the abstractions, or both. The challenge lies in the rest of the reflection capability, namely, a set of techniques for changing the models or the abstractions. We discuss these challenges and describe some of the options for addressing them.
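To make the discrepancy-detection step above concrete, here is a minimal sketch; it is purely illustrative and not code from the talk or from any NASA planning system, and it assumes abstract states are represented as dictionaries mapping fluent names to values, aligned by plan step. Every fluent whose predicted and observed values disagree is reported as a discrepancy.

```python
# Illustrative sketch (assumed representation, not from the talk): compare an
# abstracted execution trace against the abstract states predicted by a
# planning model, and report discrepancies that could drive model repair.

from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass(frozen=True)
class Discrepancy:
    step: int       # plan step at which prediction and observation diverge
    fluent: str     # abstract state variable (fluent) whose values differ
    predicted: Any  # value the planning model predicted
    observed: Any   # value abstracted from the execution trace


def find_discrepancies(predicted: List[Dict[str, Any]],
                       observed: List[Dict[str, Any]]) -> List[Discrepancy]:
    """Both arguments are per-step abstract states, aligned by plan step;
    each state maps fluent names to values."""
    diffs = []
    for step, (pred, obs) in enumerate(zip(predicted, observed)):
        for fluent in sorted(set(pred) | set(obs)):
            if pred.get(fluent) != obs.get(fluent):
                diffs.append(Discrepancy(step, fluent,
                                         pred.get(fluent), obs.get(fluent)))
    return diffs


# Hypothetical example: the model predicted the gripper would hold the block
# after step 1, but the state abstracted from telemetry says it does not.
predicted = [{"holding(block)": False}, {"holding(block)": True}]
observed = [{"holding(block)": False}, {"holding(block)": False}]
print(find_discrepancies(predicted, observed))
```

In a full reflection capability, the resulting list of discrepancies would then feed the model- or abstraction-repair techniques discussed in the talk.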
Dr. Jeremy Frank is the many-times great grandson of the infamous Dr. Victor Frankenstein; you could say that Artificial Intelligence runs in the family. While unable to completely live down the infamous family name, Dr. Frank has nevertheless managed to bury the most unsavory parts of his history, and has made a minor name for himself by writing some obscure papers in the areas of computer science and AI, while moonlighting as a rocket scientist for a little-known space exploration agency that resides within a bloated, Byzantine bureaucracy that poorly serves a large Western hemisphere country. Dr. Frank strives one day to abandon these fruitless pursuits and give over his time to his passions of writing fiction, birding, and cooking, but in the meantime, has shown some small skill in organizing people and fundraising.
In other words, Dr. Jeremy Frank is the Group Lead of the Planning and Scheduling Group in the Intelligent Systems Division at NASA Ames Research Center. He received his Ph.D. from the Department of Computer Science at the University of California, Davis, in June 1997. He also has a B.A. in Mathematics from Pomona College. Dr. Frank’s work involves the development of automated planning and scheduling systems for use in space mission operations; the integration of technologies for planning, plan execution, and fault detection for space applications; and the development of technology to enable astronauts to autonomously operate spacecraft. Dr. Frank has published over 50 conference papers, nine journal papers, and three book chapters, and has received over 40 NASA awards, including the Exceptional Achievement Medal, the Silver Snoopy, and the NASA Engineering and Safety Center Award.
(Thanks to Jeremy for offering an alternative bio!)

Organizing Committee:
Siddharth Srivastava | Arizona State University |
Shiqi Zhang | Cleveland State University |
Nick Hawes | University of Oxford |
Erez Karpas | Technion – Israel Institute of Technology |
George Konidaris | Brown University |
Matteo Leonetti | University of Leeds |
Mohan Sridharan | The University of Auckland |
Jeremy Wyatt | University of Birmingham |
Program Committee:
Christopher Amato | Northeastern University |
J. Benton | NASA Ames Research Center / AAMU-RISE |
Joydeep Biswas | University of Massachusetts Amherst |
Minh Do | NASA Ames Research Center |
Esra Erdem | Sabanci University |
Georgios Fainekos | Arizona State University |
Alberto Finzi | Università di Napoli Federico II |
Michael Gelfond | Texas Tech University |
Marc Hanheide | University of Lincoln |
Laura Hiatt | U.S. Naval Research Laboratory |
Luca Iocchi | Sapienza University of Rome |
Leslie Kaelbling | MIT |
Sven Koenig | University of Southern California |
Lars Kunze | University of Oxford |
Bruno Lacerda | University of Oxford |
Gerhard Lakemeyer | RWTH Aachen University |
Daniele Magazzeni | King's College London |
Lenka Mudrova | University of Birmingham |
Tim Niemueller | RWTH Aachen University |
Andrea Orlandini | National Research Council of Italy (ISTC-CNR) |
Federico Pecora | Örebro University |
Subramanian Ramamoorthy | The University of Edinburgh |
Mark Roberts | Naval Research Laboratory |
Alessandro Saffiotti | Örebro University |
Enrico Scala | ANU Research School of Computer Science |
Jivko Sinapov | Tufts University |
Sylvie Thiébaux | ANU |
Yu Zhang | Arizona State University |
Shlomo Zilberstein | University of Massachusetts Amherst |