March 5th, 2018.
The Inaugural International Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction.
Co-located with HRI 2018
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) seeks to bring together researchers from the HRI, Robotics, Artificial Intelligence, and Mixed Reality communities with the goal of identifying and codifying the challenges in the emerging area of mixed reality interactions between humans and robots. While there has been sporadic work in this area over the last decade, with interest mostly confined to industrial applications, recent advances in mixed reality technologies have opened up exciting avenues of research in HRI. This is the first workshop of its kind at an academic AI or Robotics conference, and it is intended as a timely call to arms to the academic community in response to the growing promise of this emerging field.
The workshop covers contributions across a wide range of topics in Augmented and Virtual Reality for HRI, including but not limited to:

- AR-based intention communication
- AR-based behavior explanation
- AR/VR for robot testing and diagnostics
- VR for HRI human-subject experimentation
- Efficient representations for AR/VR
- Mixed-reality language grounding
- AR-augmented natural language generation
- AR-enabled robot control interfaces
- Hardware/software architectures for AR/VR-based HRI
- HRI problems that can benefit most from emerging AR and VR technologies
Blair MacIntyre is a Principal Research Scientist in the Emerging Technologies group at Mozilla, and a Professor of Interactive Computing at Georgia Tech. He has been doing AR research and development since 1991, founded the Augmented Environments Lab at Georgia Tech in 1999, and joined Mozilla in 2016. He has been working on bringing AR to the web since 2008, when he started the open-source Argon project at Georgia Tech. At Mozilla, he is leading the effort to bring high performance AR to commodity web browsers. He has worked on AR systems in military, industrial, educational, entertainment, and gaming domains, and consults on technical and legal issues in AR.
Augmented Reality is a technology that overlays digital information on a person’s view of the world around them, to enhance their perception or understanding. Systems that use AR techniques rely on rich knowledge of the world near the viewer; when people are working with robots and other sensor-based systems, AR techniques can be used to leverage and expose otherwise-hidden digital information those systems have. In our work, we have explored how to use AR to communicate sensor information to workers on assembly lines, and the state and plans of multi-robot systems in the Robotarium. We are particularly interested in leveraging web-based technologies to create AR interfaces, simplifying creation and distribution of platform-independent network-based systems. In this talk I will discuss our past work, and the opportunities WebXR may bring for HRI.
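To make the web-based approach concrete, the sketch below shows how a browser-based AR session might be requested with the WebXR Device API, which was still an evolving draft at the time of the workshop. This is a minimal, hypothetical example, not code from the talk: the `startARSession` helper and the choice of TypeScript are illustrative assumptions, and a real application would also attach a WebGL rendering layer before drawing robot state into the scene.

```ts
// Minimal sketch: requesting an immersive AR session via the WebXR Device API.
// Assumes a WebXR-capable browser and the @types/webxr type declarations.

async function startARSession(): Promise<void> {
  // Feature-detect WebXR and immersive-AR support before requesting a session.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    console.warn("Immersive AR is not available in this browser.");
    return;
  }

  // In a full app this call is made from a user gesture (e.g. a button click),
  // and an XRWebGLLayer is attached via session.updateRenderState() for drawing.
  const session = await navigator.xr.requestSession("immersive-ar");
  const refSpace = await session.requestReferenceSpace("local");

  // Per-frame loop: read the viewer's pose, then render overlays (robot poses,
  // sensor readings, planned trajectories) registered to the physical world.
  const onFrame = (time: DOMHighResTimeStamp, frame: XRFrame): void => {
    const viewerPose = frame.getViewerPose(refSpace);
    if (viewerPose) {
      // viewerPose.transform holds the viewer's position and orientation;
      // overlay rendering (e.g. with WebGL) would happen here.
    }
    frame.session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}

startARSession();
```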
Tom Williams is an assistant professor of computer science at the Colorado School of Mines, where he directs the Mines Interactive Robotics Research (MIRROR) Lab. His research focuses on enabling and understanding natural language-based human-robot interaction, especially as applied to assistive and search-and-rescue robotics, and has been featured in international AI and Robotics conferences such as HRI, AAAI, AAMAS, IROS, RSS, and INLG, as well as the JHRI and RAS journals. Tom served as program committee co-chair for the 2016 HRI Pioneers workshop and the AAAI Fall Symposium on AI for Human-Robot Interaction (AI-HRI), is a special track session chair for EAAI 2018, and is a Senior Program Committee member for AAAI 2018.
Daniel Szafir is an assistant professor in the Department of Computer Science and ATLAS Institute at the University of Colorado Boulder, where he directs the Interactive Robotics and Novel Technologies (IRON) Lab. His research at the intersection of robotics and human-computer interaction (HCI) focuses on investigating how novel technologies can mediate interactions between people and autonomous systems. His work has been featured in international robotics, virtual/augmented reality, and design conferences including HRI, ISMAR, 3DUI (now merged with IEEE VR), CHI, and DIS, as well as the International Journal of Robotics Research (IJRR). Daniel has previously served as the Videos and Demos Co-Chair for HRI 2017, an organizing committee member for the RSS 2017 Workshop on Bridging the Gap in Space Robotics, the panel chair for the 2015 HRI Pioneers workshop, and a program committee member for HRI, RO-MAN, CHI, RSS, and ARSO.
Tathagata Chakraborti is a senior Ph.D. student at Arizona State University working in the Yochan Lab with Prof. Subbarao Kambhampati. His research interests include planning with humans in the loop, with applications in task planning for human-robot teaming and cohabitation, and proactive decision support. His research has been featured in premier artificial intelligence and robotics conferences and workshops worldwide, including AAAI, IJCAI, AAMAS, ICAPS, IROS, and ICRA. He has also received back-to-back IBM Ph.D. Fellowships and multiple University Graduate Fellowship Awards in recognition of his work. He has been on the organizing team for the Workshop on Multi-agent Interaction without Prior Coordination (MIPC) at AAMAS'17 and the Workshop on Explainable AI (XAI) at IJCAI 2017, as well as on the Review Process Committee of IJCAI 2016 and the Program Committee of IJCAI 2018. Recently, he was the team lead of ÆRobotics, which reached the US Finals of the Microsoft Imagine Cup 2017 with an innovative solution for intention projection between humans and robots using augmented reality.
Heni Ben Amor is an Assistant Professor at Arizona State University, where he heads the ASU Interactive Robotics Lab. Prior to that, he was a Research Scientist at the Institute for Robotics and Intelligent Machines at Georgia Tech in Atlanta. Heni studied Computer Science at the University of Koblenz-Landau (Germany) and earned a Ph.D. in robotics from the Technical University Freiberg and the University of Osaka in 2010. Before moving to the US, he was a postdoctoral scholar at the Technical University Darmstadt. His research focuses on artificial intelligence, machine learning, human-robot interaction, robot vision, and automatic motor skill acquisition. He received the highly competitive Daimler-and-Benz Fellowship, as well as several best paper awards at major robotics and AI conferences. He serves on the program committees of various AI and robotics conferences, including AAAI, IJCAI, IROS, and ICRA.
Matthias Scheutz is a professor of cognitive and computer science, and director of the Human-Robot Interaction Laboratory at Tufts University. His research interests include artificial intelligence, cognitive modeling, foundations of cognitive science, human-robot interaction, multi-scale agent-based models and natural language processing.
Subbarao Kambhampati is a professor of Computer Science & Engineering at Arizona State University, where he directs the Yochan research group. His research interests are primarily in Artificial Intelligence, and include planning and decision making, human-robot teaming and human-aware AI. He is an elected Fellow and the current president of the Association for the Advancement of Artificial Intelligence (AAAI).
Bilge Mutlu is an Associate Professor of Computer Science, Psychology, and Industrial Engineering at the University of Wisconsin–Madison. He directs the Wisconsin HCI Laboratory. His research program draws on technical, behavioral, and design perspectives to build human-centered principles for the design of robotic technologies.
William Hoff is an associate professor in the Department of Computer Science at the Colorado School of Mines. He teaches courses in computer vision and mobile application development. His research interests include computer vision and pattern recognition, with applications in robotics, video-aided navigation, activity recognition, and augmented reality.