The workshop will take place in conjunction with the
Thirteenth International Conference on Intelligent Virtual Agents (IVA 2013)
Edinburgh, UK, August 28, 2013


A major goal of human-computer interaction (HCI) research and applications is to improve the interaction between humans and computers by making computers adaptive to the user’s situation, for instance to their intentions, needs, or affective state. Future HCI research will therefore focus more strongly on the user, and in order to build intelligent HCI, questions such as how to perceive the user’s state and how computers should adapt to that state need to be answered. Consequently, the computer is evolving from a mere counterpart of its user into a companion. Current communication is driven by technology; however, as technical systems penetrate more and more areas of everyday life and become permanent companions, they also have to take social aspects into account. Technical companion systems should be designed in such a way that they are generally accepted, adapt to the individual characteristics of the user, and offer individualized support.

To achieve this goal, such a system should not be limited to mere communication capabilities. It should be able to recognize the user’s intentions, dispositions (such as stressed, concentrated, or attentive), and individual characteristics. This also includes evaluating the situation in which the user is placed, assessing the user’s reaction with regard to its relevance for the current interaction, and deciding on a suitable mode of interaction. In terms of the knowledge resources required for research in this field, the focus will progressively shift from expressive, acted data to non-acted material consisting of blended emotions with low expressiveness. Recognition methods therefore have to cope with these difficulties: multiple modalities need to be combined using fusion techniques and supplemented by additional features such as paralinguistic cues.

From these multiple user inputs, the companion generates hypotheses about how to interact appropriately, which allows the course of the interaction to be influenced. Moreover, the companion must be able to analyze the interaction process and build a model that captures comprehensive knowledge of the user. Dialogue courses may then be classified afterwards and used to shape subsequent interactions.

To enable such systems, different research areas have to work together and combine their efforts. This workshop will provide a platform for methods, approaches, and techniques for the recognition and synthesis of dispositional characteristics in speech, speech content, facial expressions, gestures, and other modalities. Given these multiple modalities, fusion techniques have to be considered in both recognition and synthesis.

This workshop complements the main conference by offering a platform focused on the emerging topic of companion-like systems and provides a forum for discussing issues in HCI, pattern recognition, and affective computing.

Workshop topics should focus on companion technologies and include, but are not limited to:
  • Analyses of speech, speech content, and prosody
  • Facial expression and gesture analyses
  • Fusion techniques
  • Multi-modal recognition of the user's disposition
  • Multi-modal recognition of the user's situation
  • Synthesis of system reaction with respect to companion-like characteristics
  • Analyses of adapted system reactions
  • Applications of companion-like HCI

Authors of accepted papers are invited to submit an extended version of their work to a special issue of the journal Pattern Recognition Letters on “Pattern recognition methods in HCI”.