Learning Context on a Humanoid Robot using Incremental Latent Dirichlet Allocation

Hande Çelikkanat, Güner Orhan, Nicolas Pugeault, Frank Guerin, Erol Şahin, Sinan Kalkan

Research output: Contribution to journal › Article › peer-review


In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using a Markov Random Field (MRF), inspired by the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), a widely used method in computational linguistics for modeling topics in texts. We extend the standard LDA method to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context to its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after only a few interactions. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.
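To make the LDA machinery concrete: in the paper's formulation, each interaction episode plays the role of a "document" and each activated concept plays the role of a "word", so a latent topic corresponds to a context. The sketch below is a minimal collapsed Gibbs sampler for *standard* LDA, not the authors' incremental variant; all names, the toy data, and hyperparameter values are hypothetical and chosen only for illustration.

```python
import numpy as np

def gibbs_lda(docs, n_topics, n_vocab, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for standard LDA (illustrative sketch).

    docs: list of lists of word (concept) ids.
    Returns per-document topic proportions and topic-word counts.
    """
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # total words per topic
    z = []                                  # topic assignment of each word
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the word's current assignment from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # resample its topic from the collapsed conditional
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + n_topics * alpha)
    return theta, nkw

# Toy data: two cleanly separated "contexts" over a 4-concept vocabulary.
docs = [[0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0],
        [2, 3, 2, 3, 2, 3, 2, 3], [3, 2, 3, 2, 3, 2]]
theta, topic_word = gibbs_lda(docs, n_topics=2, n_vocab=4)
```

The incremental extension described in the paper differs from this batch sampler: roughly, it retains the sufficient statistics (the count matrices above) so that new interactions can be folded in without resampling old ones, and it can allocate a new topic when no existing context explains an interaction well. Those two mechanisms are what the sketch omits.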
Original language: English
Pages (from-to): 42-59
Number of pages: 18
Journal: IEEE Transactions on Cognitive and Developmental Systems
Issue number: 1
Publication status: Published - Mar 2016


