Hierarchical representations of perceptual and sensorimotor information in deep neural networks

The Third International Workshop on Intrinsically Motivated Open-ended Learning (IMOL 2017).


Abstract

Designing artificially intelligent agents that learn multiple tasks autonomously, incrementally and online, discovering and solving new goals and tasks on their own, can be framed as a sequential decision problem. Optimal decisions about which actions the agent should perform in the future depend on building predictive models of the environment, which can be shown to be equivalent to generating computer programs that compress the stream of sensorimotor information available to the agent as much as possible (Hutter, 2004). However, it is not clear how to generate such compressors / models in a generic way that is computationally tractable for realistic problems. Recently, deep neural networks have achieved important successes in classifying and generating data such as images, sound and text, which implies that these networks internally build models that represent such data in a compressed fashion. Deep neural networks appear to implicitly exploit heuristics that are effective for such practical problems because these heuristics fit the laws of physics of our universe (Lin, Tegmark, & Rolnick, 2016). The probabilistic framework for deep learning of Patel et al. (2016) shows how deep neural networks generate increasingly abstract representations of the data they process (e.g., images) and how these representations can be used as generative models that are able to predict / simulate such data. We investigate the representational capabilities of deep neural networks within the probabilistic framework of Patel et al. (2016). We also investigate whether such models can represent not only perceptual information, such as images, but also sensorimotor information, by generating hierarchical representations of a mix of perceptions and actions from the agent's sensorimotor history and by using these representations to build predictive models that can simulate the future for the purpose of adaptive action selection.
References:
Hutter, M. (2004). Universal Artificial Intelligence. Berlin: Springer.
Lin, H. W., Tegmark, M., & Rolnick, D. (2016). Why does deep and cheap learning work so well? arXiv preprint arXiv:1608.08225.
Patel, A. B., Nguyen, M. T., & Baraniuk, R. (2016). A probabilistic framework for deep learning. In Advances in Neural Information Processing Systems (pp. 2558-2566).
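The last sentence of the abstract describes compressing a mix of perceptions and actions into abstract representations and then using those representations as predictive models that simulate the future for action selection. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it assumes PyTorch, made-up dimensions (OBS_DIM, ACT_DIM, LATENT_DIM), a plain autoencoder plus a latent forward model rather than the deep rendering model of Patel et al. (2016), and a user-supplied score_fn standing in for whatever goal or reward signal drives the agent.

```python
# Illustrative sketch only: compress (perception, action) pairs into a latent code,
# learn a one-step forward model in that latent space, and select actions by
# simulating candidate futures. All sizes and the score function are assumptions.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM = 32, 4, 8  # assumed sizes of perception, action, latent code

# Encoder/decoder compress a (perception, action) pair into an abstract representation.
encoder = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, OBS_DIM + ACT_DIM))

# Forward model: predicts the next latent code from the current one (a one-step simulator).
forward_model = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))

def reconstruction_loss(obs, act):
    """Autoencoding loss: a stand-in for compressing the sensorimotor stream."""
    x = torch.cat([obs, act], dim=-1)
    return ((decoder(encoder(x)) - x) ** 2).mean()

def prediction_loss(obs_t, act_t, obs_next, act_next):
    """Train the forward model to map the current latent code to the next one."""
    z_t = encoder(torch.cat([obs_t, act_t], dim=-1))
    z_next = encoder(torch.cat([obs_next, act_next], dim=-1))
    return ((forward_model(z_t) - z_next.detach()) ** 2).mean()

def select_action(obs, candidate_actions, horizon, score_fn):
    """Adaptive action selection: roll each candidate action forward in latent space
    and pick the one whose simulated future scores best. score_fn is a hypothetical
    user-supplied function returning a scalar for a decoded sensorimotor vector."""
    best_action, best_score = None, float("-inf")
    with torch.no_grad():
        for act in candidate_actions:
            z = encoder(torch.cat([obs, act], dim=-1))
            score = 0.0
            for _ in range(horizon):
                z = forward_model(z)
                score += float(score_fn(decoder(z)))
            if score > best_score:
                best_action, best_score = act, score
    return best_action
```

In this sketch the two losses would be minimized jointly over the agent's sensorimotor history with a standard optimizer; the point is only to show how a compressed representation of perceptions and actions can double as a simulator that supports choosing actions by looking ahead.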


