Learning Ability Models for Human-Robot Collaboration
by Alexandra Kirsch and Fan Cheng
Abstract:
Our vision is a pro-active robot that assists elderly or disabled people in everyday activities. Such a robot needs knowledge in the form of prediction models about a person's abilities, preferences and expectations in order to decide on the best way to assist. We are interested in learning such models from observation. We report on a first approach to learn ability models for manipulation tasks and identify some general challenges for the acquisition of human models.
Reference:
Alexandra Kirsch and Fan Cheng, "Learning Ability Models for Human-Robot Collaboration", In Robotics: Science and Systems (RSS) — Workshop on Learning for Human-Robot Interaction Modeling, 2010.
Bibtex Entry:
@inproceedings{kirsch10learning,
  author           = {Alexandra Kirsch and Fan Cheng},
  title            = {Learning Ability Models for Human-Robot Collaboration},
  booktitle        = {Robotics: Science and Systems (RSS) --- Workshop on Learning for Human-Robot Interaction Modeling},
  year             = {2010},
  bib2html_pubtype = {Workshop Paper},
  bib2html_groups  = {PARA},
  bib2html_funding = {CoTeSys},
  bib2html_rescat  = {Models, Human-Robot Interaction},
  bib2html_domain  = {Assistive Household},
  abstract         = {Our vision is a pro-active robot that assists elderly or disabled people in everyday activities. Such a robot needs knowledge in the form of prediction models about a person's abilities, preferences and expectations in order to decide on the best way to assist. We are interested in learning such models from observation. We report on a first approach to learn ability models for manipulation tasks and identify some general challenges for the acquisition of human models.}
}