The visual recognition of objects by humans in everyday life is typically rapid and effortless. Until very recently, animate visual systems were the only ones capable of this remarkable computational achievement. Now, however, a class of computer vision algorithms called Deep Neural Networks (DNNs) achieves human-level classification performance on large sets of object images. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may also be good models of human visual object recognition. At the same time, there clearly exist vast architectural and processing differences between current DNNs and the primate visual system, and the behavioural consequences of these differences are not yet well understood. Here we show that three current DNNs are still not as robust as the human visual system in their generalisation ability in a multi-task setting. We found not only that the human visual system is more robust to image degradations such as contrast reduction, additive noise, and eidolon distortions, but also that the classification error patterns of humans and DNNs diverge progressively as the signal weakens. Our results demonstrate that there are still marked differences in the way humans and current DNNs process object information.
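Two of the image degradations mentioned above, contrast reduction and additive noise, can be sketched in a few lines of NumPy. This is a minimal illustration of the general techniques, not the study's actual stimulus-generation code; the function names and parameter values (`c=0.3`, `width=0.35`) are illustrative assumptions.

```python
import numpy as np

def reduce_contrast(img, c):
    """Blend pixel values towards mid-grey: c=1.0 keeps full contrast,
    c=0.0 yields a uniform grey image (values assumed in [0, 1])."""
    return c * img + (1.0 - c) * 0.5

def add_uniform_noise(img, width, rng):
    """Add uniform noise in [-width, width], then clip back to [0, 1]."""
    noisy = img + rng.uniform(-width, width, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

# Illustration on a small synthetic greyscale "image" in [0, 1].
rng = np.random.default_rng(0)
img = rng.random((4, 4))

low_contrast = reduce_contrast(img, c=0.3)
noisy = add_uniform_noise(img, width=0.35, rng=rng)

# Reducing contrast compresses pixel values around mid-grey,
# so the standard deviation shrinks while the shape is unchanged.
assert low_contrast.std() < img.std()
```

In an experiment, degradation strength (here `c` or `width`) would be varied parametrically to weaken the signal and compare how human and DNN classification accuracy and error patterns change.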