Defense in English
Zoom Link: https://univ-tlse3-fr.zoom.us/j/93805472327?pwd=ZXFwaFVISENObXRlV0JXMFlReXM3UT09
- ID : 938 0547 2327
- Code secret : 073857
Supervisors : Martin Giurfa and Aurore Avarguès-Weber
Committee members :
- Ludovic Dickel
- Elisa Frasnelli
- Stéphane Viollet
- Aurore Avarguès-Weber
- Martin Giurfa
Equipped with a brain smaller than one cubic millimeter and containing ~950,000 neurons, honeybees display a rich behavioral repertoire, among which appetitive learning and memory play a fundamental role in the context of foraging activities. Besides elemental forms of learning, where bees learn specific associations between environmental features, bees also master different forms of non-elemental learning, both in the visual and in the olfactory domain, including categorization, contextual learning and rule abstraction. These characteristics make them an ideal model for the study of visual learning and for exploring the neural mechanisms underlying their learning abilities. In order to access the working brain of a bee during a visual learning task, the insect needs to be immobilized. Hence, virtual reality (VR) setups have been developed to allow bees to behave within a virtual world while remaining stationary within the real world. During my PhD, I developed a flexible, open-source 3D VR software to study visual learning, and used it to improve existing conditioning protocols in a virtual environment and to investigate the neural mechanisms of visual learning.
Investigating the influence of optic flow on associative color learning, I found that increased motion cues from the background impaired the bees' performance. This led me to identify issues that may affect decision-making in VR landscapes and that require specific control by experimenters.
By means of the VR setup, I induced visual learning in tethered bees and quantified immediate early gene (IEG) expression in specific areas of their brain to detect regions involved in visual learning. In particular, I focused on kakusei, Hr38 and Egr1, three IEGs that have been related to bee foraging and orientation and thus may also be relevant when making appetitive visual associations. This analysis suggests that the mushroom bodies are involved in associative color learning.
Finally, I explored the possibility of using the VR setup with other insect models and performed differential conditioning on bumblebees. Interestingly, not only are bumblebees able to solve this cognitive task as well as honeybees, but they also engage more with the virtual environment, leading to a lower proportion of discarded individuals. These results indicate that the VR protocols I have established over the course of this PhD may be applied to other insects, and that the bumblebee is a good candidate for the study of visual learning under VR conditions.