Mechanisms of representation transfer

cyvy Research Project

This project investigates new ways of transferring characteristics of the human visual system to artificial neural networks, with the aim of making them more robust to changes in image features, such as changes in image style that leave the image's content unaltered. At present, no learning algorithm is capable of robustly generalizing what it has learned to untrained image features. Artificial neural networks quickly make mistakes when an image changes even slightly, for instance when noise is added or the style is altered, whereas humans have no trouble recognizing the content of the image in such cases. Even though most of us grow up in an environment with specific visual characteristics (such as the Black Forest), our visual system generalizes easily to completely different settings (such as a desert or a painting).

Previous work has shown that deep artificial neural networks base their decisions on very different image features than our visual system does. For example, while we usually categorize objects by their shape, these networks rely mainly on local patterns in the image. It remains very difficult to incorporate the image features humans use for perception into artificial systems, as we simply know too little about the exact properties of biological systems.

This is why we want to develop mechanisms that transfer robust features directly from measurements of brain activity to artificial systems. Under controlled conditions, we will first investigate the mechanisms by which these features can be transferred between networks. In the final phase of the project, we will use publicly available measurements of neural activity from the visual system to test which of these neural properties can be transferred to artificial networks using the methods we have developed.
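As an illustration only (this is not the project's actual method), the controlled network-to-network setting can be sketched as aligning the hidden representation of one small "student" network to that of a fixed "teacher" network with a least-squares linear map, a common baseline for comparing and transferring representations. All names, sizes, and the toy networks here are assumptions made up for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher": a fixed random projection with a nonlinearity, standing in
# for a network (or, later in the project, a biological system) whose
# representations we want to transfer. Entirely hypothetical.
W_teacher = rng.normal(size=(8, 4))

def teacher(x):
    return np.tanh(x @ W_teacher)

# Toy "student": a second network with its own, differently sized hidden layer.
W_student = rng.normal(size=(8, 6))

def student(x):
    return np.tanh(x @ W_student)

# Stimuli: each row is one input presented to both systems.
X = rng.normal(size=(200, 8))
T = teacher(X)   # target representations (teacher)
S = student(X)   # source representations (student)

# Fit a least-squares linear map M from student space to teacher space,
# i.e. minimize ||S @ M - T||^2 over M.
M, *_ = np.linalg.lstsq(S, T, rcond=None)

# Alignment error of a naive baseline (just taking the first four student
# dimensions) versus the fitted linear map.
err_before = np.mean((S[:, :4] - T) ** 2)
err_after = np.mean((S @ M - T) ** 2)
print(err_after < err_before)  # expected: True
```

The least-squares solution can never do worse than the column-selection baseline, since that baseline corresponds to one particular choice of M; the gap between the two errors gives a crude measure of how much of the teacher's representation is linearly recoverable from the student's.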