Things that Make Robots Go HMMM: Heterogeneous Multilevel Multimodal Mixing to Realise Fluent, Multiparty, Human-Robot Interaction

Daniel Davison, Binnur Gorer, Jan Kolkmeier, Johannes Maria Linssen, Bob R. Schadenberg, Bob van de Vijver, Nick Campbell, Edwin Dertien, Dennis Reidsma

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    Abstract

    Fluent, multi-party, human-robot interaction calls for the mixing of deliberate conversational behaviour and reactive, semi-autonomous behaviour. In this project, we worked on a novel, state-of-the-art setup for realising such interactions. We approach this challenge from two sides. On the one hand, a dialogue manager requests deliberative behaviour and sets parameters on ongoing (semi-)autonomous behaviour. On the other hand, robot control software needs to translate and mix these deliberative and bottom-up behaviours into consistent and coherent motion. The two need to collaborate to create behaviour that is fluent, naturally varied, and well-integrated. The resulting challenge is that this behaviour needs to conform simultaneously to high-level requirements and to the content and timing set by the dialogue manager. We tackled this challenge by designing a framework which can mix these two types of behaviour, using AsapRealizer, a Behaviour Markup Language realiser. We call this Heterogeneous Multilevel Multimodal Mixing (HMMM). Our framework is showcased in a scenario which revolves around a robot receptionist which is able to interact with multiple users.
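
    The abstract mentions that the dialogue manager requests deliberative behaviour through AsapRealizer, a Behaviour Markup Language (BML) realiser. As an illustrative sketch only, a deliberative behaviour request in BML might look like the fragment below; the element names and sync-point syntax follow the BML 1.0 standard, while the `id` values and the gaze target are hypothetical, not taken from the paper:

    ```xml
    <bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
      <!-- Deliberative speech act requested by the dialogue manager -->
      <speech id="speech1">
        <text>Welcome! How can I help you?</text>
      </speech>
      <!-- Gaze behaviour synchronised to the start of the utterance;
           the target "visitor1" is a hypothetical scene entity -->
      <gaze id="gaze1" target="visitor1" start="speech1:start"/>
    </bml>
    ```

    In a setup like the one described, such a block would be merged by the realiser with ongoing semi-autonomous behaviour (e.g. idle motion, reactive gaze), which is the mixing problem HMMM addresses.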
    Original language: English
    Title of host publication: Proceedings of eNTERFACE '16
    Editors: Khiet P. Truong, Dennis Reidsma
    Place of publication: Enschede
    Publisher: Telematica Instituut / CTIT
    Pages: 6-20
    Number of pages: 15
    Publication status: Published - 2017
    Event: eNTERFACE’16 - 12th Summer Workshop on Multimodal Interfaces, Enschede, Netherlands
    Duration: 18 Jul 2016 - 12 Aug 2016
    Conference number: 12


    Fingerprint: human-robot interaction; managers; robots; markup languages

    Cite this

    Davison, D., Gorer, B., Kolkmeier, J., Linssen, J. M., Schadenberg, B. R., van de Vijver, B., Campbell, N., Dertien, E., & Reidsma, D. (2017). Things that Make Robots Go HMMM: Heterogeneous Multilevel Multimodal Mixing to Realise Fluent, Multiparty, Human-Robot Interaction. In K. P. Truong & D. Reidsma (Eds.), Proceedings of eNTERFACE '16 (pp. 6-20). Enschede: Telematica Instituut / CTIT.
