Most of the projects in the brAIn lab fall into three (non-exclusive) categories: experiments, data analysis and computational modelling.
You might, for example, study a specific computer game using our lab equipment, or develop a novel machine-learning model to tackle specific brain mechanisms and signals.
If you are interested in software development, the lab is also developing a framework for multi-modal data collection, storage and analysis.

The remainder of this page contains a few ideas for potential Master’s and Bachelor’s theses in the lab. We try our best to keep this page updated; however, there might be some outdated projects here and some newer ideas that are not listed yet. If you are interested in these topics, please do not hesitate to contact us; you can find our email addresses on the contacts page.

Investigate the application of deep learning to encode psychophysiological signals (e.g. EEG, gaze, blood pulse) and model human brain behaviour.

  • Stable graph inference in EEG signals
    Using graph neural networks, we can infer a latent graph structure from the signal, allowing us to use graph analysis to understand the trained model. Turning this into a reliable method of analysing EEG signals would be a huge development in the field! Several problems remain, though; to begin with, the learned graph is unstable across different model initialisations, and fixing this is critical for making the method reliable. We need to develop ways to make the model converge to stable graph structures given the same dataset (a minimal stability-check sketch follows this list).
  • Discretised raw EEG transformer
    Transformer models have, through the powerful attention mechanism, shown themselves to be strong models of sequences. They are, however, mainly optimised for discrete events (typically tokens), whereas EEG comes in the form of continuous data. Therefore, applying the transformer architecture to raw EEG is not optimal. The goal of this project is to discretise the signal using an information-theoretical technique called permutation entropy (a tokenisation sketch follows this list). This would possibly allow better generalisation and performance of the model, harnessing the strength of transformers.
  • Explainable EEG transformer
    Transformers have been shown to be excellent models of word sequences. Recently, they have also been applied to EEG data with reasonable performance. With some clever engineering, we can make the models much more explainable, allowing us to understand the important features of the EEG signal. This is an incredibly important development if neuroscientists want to use deep learning models to answer scientific questions about the brain. Can we leverage this technique to explain some cognitive phenomena?
  • Transformers for EEG graph inference
    Transformers’ attention mechanism makes them mathematically equivalent to fully connected graph neural networks, since attention connects every element to every other element by default. In other words, when we tune the attention weights, we are actually learning a graph structure. Can we adapt the transformer models to make them understandable as graph-learning models (see the attention-as-adjacency sketch after this list)? This would give researchers a level of explainability not previously seen in transformers.
  • EEG private autoencoders
    EEG signals from different people can look very different, mainly because our brains work in different ways. Learning a good shared representation across people would greatly help the analysis of EEG signals, and we think a private encoding layer would solve this problem (a sketch of such an architecture follows this list). The dataset and/or model to be worked on here is undecided.
  • EEG transfer learning
    A big problem in applying deep learning to EEG data is access to large amounts of training data: a small lab is simply not able to collect the massive amounts of data needed to train complex models. In other areas of deep learning (like computer vision and language models), transfer learning and fine-tuning are commonly used techniques. Is it possible to do the same with EEG? That is, can we take some large existing datasets, learn the general structure of EEG signals from them, and then transfer that knowledge to a smaller dataset to increase performance? (A fine-tuning sketch follows this list.)
  • MoviEEG
    There is a big need in neuroscience for larger EEG datasets with well-structured inputs. However, EEG experiments can be boring and tiring, making them difficult to complete for longer than an hour at a time. What if we could turn the not-so-great lab experience into a nice movie night? Then an experiment could easily last two hours! What we need is a couple of movies that are well labelled, with cuts, scene descriptions and dialogue. Then we can simply play the movie and create a much-needed dataset in the process!
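For the stable-graph-inference idea above, a first experiment could simply compare the adjacency matrices learned by two differently seeded training runs. A minimal sketch in Python, assuming each run produces one channels-by-channels adjacency matrix (the matrices below are simulated placeholders):

    import numpy as np

    def graph_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Pearson correlation between the upper triangles of two learned adjacency matrices."""
        iu = np.triu_indices_from(a, k=1)          # ignore self-connections
        return float(np.corrcoef(a[iu], b[iu])[0, 1])

    # Hypothetical adjacencies from two differently seeded runs on the same dataset.
    rng = np.random.default_rng(0)
    n_channels = 32
    run_a = rng.random((n_channels, n_channels))
    run_b = run_a + 0.1 * rng.standard_normal((n_channels, n_channels))

    print(f"similarity between runs: {graph_similarity(run_a, run_b):.3f}")

A value close to 1 across many seed pairs would indicate the kind of stability the project is after.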
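For the discretised raw EEG transformer, the core step is turning a continuous trace into discrete tokens. Permutation entropy is built on ordinal patterns, so one possible tokenisation maps each short window to the permutation that sorts it. A minimal sketch, with a simulated signal and illustrative order/delay values:

    import numpy as np
    from itertools import permutations

    def ordinal_tokens(signal: np.ndarray, order: int = 3, delay: int = 1) -> np.ndarray:
        """Map a 1-D signal to integer tokens, one per ordinal pattern."""
        patterns = {p: i for i, p in enumerate(permutations(range(order)))}
        tokens = []
        for start in range(len(signal) - (order - 1) * delay):
            window = signal[start : start + order * delay : delay]
            tokens.append(patterns[tuple(np.argsort(window))])
        return np.array(tokens)

    # Hypothetical single-channel EEG trace: 10 s at 256 Hz.
    eeg = np.random.default_rng(1).standard_normal(10 * 256)
    tokens = ordinal_tokens(eeg)
    print(tokens[:20])      # symbols in {0, ..., 5} for order 3

The resulting token sequence is what a standard transformer could consume; permutation entropy itself is the Shannon entropy of the token distribution.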
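For the transformer-as-graph idea, the attention weights of a layer can be read directly as a weighted, fully connected adjacency matrix over the input elements. A minimal PyTorch sketch with random stand-in "EEG tokens":

    import torch
    from torch import nn

    attention = nn.MultiheadAttention(embed_dim=16, num_heads=1, batch_first=True)
    tokens = torch.randn(1, 10, 16)                  # 10 hypothetical EEG tokens
    _, weights = attention(tokens, tokens, tokens, need_weights=True)
    adjacency = weights[0]                           # 10 x 10 attention matrix = learned graph
    print(adjacency.shape)

The project would then be about adapting the model so that this matrix can be read as a meaningful graph rather than an arbitrary attention pattern.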
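For the private-autoencoder idea, one possible architecture puts a small subject-specific layer in front of a shared encoder/decoder. A minimal PyTorch sketch, with all sizes and the number of subjects as placeholders:

    import torch
    from torch import nn

    class PrivateAutoencoder(nn.Module):
        def __init__(self, n_subjects: int, n_channels: int = 32, latent: int = 16):
            super().__init__()
            # One private input layer per subject; the rest of the network is shared.
            self.private = nn.ModuleList(
                [nn.Linear(n_channels, n_channels) for _ in range(n_subjects)]
            )
            self.encoder = nn.Sequential(nn.Linear(n_channels, latent), nn.ReLU())
            self.decoder = nn.Linear(latent, n_channels)

        def forward(self, x: torch.Tensor, subject: int) -> torch.Tensor:
            return self.decoder(self.encoder(self.private[subject](x)))

    # Hypothetical usage: a batch of 8 samples from subject 3.
    model = PrivateAutoencoder(n_subjects=10)
    batch = torch.randn(8, 32)
    loss = nn.functional.mse_loss(model(batch, subject=3), batch)

The shared latent space is then what should look similar across people, while each private layer absorbs subject-specific differences.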
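For the transfer-learning idea, the usual recipe is to freeze an encoder pretrained on a large corpus and train only a small task head on the in-lab data. A minimal PyTorch sketch; the pretrained encoder and the dataset here are random stand-ins:

    import torch
    from torch import nn

    pretrained_encoder = nn.Sequential(      # stand-in for a model pretrained on a large EEG corpus
        nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64)
    )
    for param in pretrained_encoder.parameters():
        param.requires_grad = False          # freeze the general EEG representation

    classifier = nn.Linear(64, 2)            # new task-specific head
    optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

    # Hypothetical small in-lab dataset: 100 samples, 32 channels, binary labels.
    x, y = torch.randn(100, 32), torch.randint(0, 2, (100,))

    for epoch in range(10):                  # fine-tune only the head
        loss = nn.functional.cross_entropy(classifier(pretrained_encoder(x)), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The interesting question is then how much the frozen representation actually helps compared with training the whole model from scratch on the small dataset.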

Create computational models of player experience.

  • Can we detect and qualify different players’ emotions and cognitive states while they are playing?
  • Can we adapt the gameplay? Can we generate new content driven by the experience? (generative AI in games)
  • GamEEG
    There is a big need in neuroscience for larger EEG datasets with well-structured inputs. However, EEG experiments can be boring and tiring, making them difficult to complete for longer than an hour at a time. What if we could turn the not-so-great lab experience into a gaming session? Then an experiment could easily last two hours! We would need to develop our own games or use open-source ones, so that we can record every event happening in the game. Then we can invite people to play while we record their brain activity, creating a much-needed dataset!
  • How can confusion be leveraged to prevent boredom in a game or learning situation?
  • How can theories from learning and education be used in games to gain insight into the learning experience of playing a game?
  • What strategies can be applied to prevent frustration and boredom in player experience?
  • Dynamically adapting games to control confusion.
  • Game tests focused on engagement, confusion, frustration, and boredom.

Investigate how to integrate non-verbal cues into conversations with conversational AI and virtual agents.

  • How can a virtual agent understand user emotions? Can non-verbal cues be used to improve the interaction with virtual characters?
  • “Modelling Social Saliency: A Data-Driven Approach Using Eye-Tracking Data for Social Image Analysis”
    This project could focus on publishing and analyzing the eye-tracking data to develop models that predict social saliency (areas of focus and attention) in social images. You could compare various machine learning techniques to model this behaviour and publish the dataset as a resource for the community.
  • “GSR Event Detection for Emotion Recognition in Human-Robot Interaction: A Multimodal Approach”
    This idea centres on using Galvanic Skin Response (GSR) data to detect emotional events in the context of human-robot interaction. You can enhance the project by incorporating other modalities (e.g., eye tracking, EEG) for better emotion recognition accuracy (a simple GSR peak-detection sketch follows this list).
  • “Deep Learning for Emotion Recognition from Face Videos: Leveraging Multimodal Cues for Enhanced Accuracy”
    In this project, you could focus on developing a deep learning-based system for emotion recognition using face videos. You can compare the performance of different models and possibly integrate eye tracking and EEG data to improve the emotion recognition process.
  • “Emotional Awareness: A Comparative Analysis of Physiological Signals and Self-Annotation in Emotion Recognition”
    This project could explore how well physiological signals (like eye tracking, EEG, and GSR) correlate with self-reported emotional states. It could involve building predictive models and performing a comparative analysis of physiological data vs. self-annotations to understand emotional awareness better.
  • “Predicting Personality Traits Using Multimodal Data: Analyzing Eye-Tracking, EEG, and Physiological Responses”
    This project could aim at predicting personality traits based on the multimodal dataset you have, leveraging eye-tracking, EEG, face video, and GSR data. It could explore the relationship between physiological responses and established personality models (the Big Five).
  • “Social Cognition and Attention Prediction: A Dataset for Modeling Human Focus in Social Imagery”
    This project could focus on releasing and analyzing the dataset related to social cognition and attention prediction in social images. The research can emphasize the practical application of this dataset in various domains, such as advertising, social media, or cognitive modelling.
  • “Multimodal Emotion Recognition from Talking Faces: Analyzing EEG, Eye Tracking, and GSR Data”
    Here, you could focus on combining various modalities, such as EEG, eye tracking, and GSR, to enhance emotion recognition in talking faces. The project could compare how different modalities contribute to recognizing subtle emotional cues in face-to-face interactions.
  • “Cross-Modal Emotion Recognition: Leveraging EEG and GSR to Improve Facial Emotion Recognition in Talking Faces”
    This research could explore the integration of cross-modal signals, particularly EEG and GSR, with visual features from face videos to improve emotion recognition systems. The project could aim at a more holistic approach to understanding emotions in dynamic, real-world scenarios.
  • “Uncovering Social Attention Patterns: A Dataset for Predicting Focus in Social Image Perception”
    This title frames your eye-tracking data as a resource for understanding how people perceive and focus on social images. The project could include a detailed dataset analysis along with predictive modeling of attention patterns.
  • “Emotion Dynamics: Multimodal Fusion of Physiological and Behavioral Data for Real-Time Emotion Detection”
    This project could explore real-time emotion recognition by fusing data from different modalities (EEG, eye tracking, GSR, and face videos) to create a dynamic model of emotional responses in face-to-face interactions.
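For the GSR event-detection idea in the list above, a simple baseline is to treat phasic skin-conductance responses as peaks in the signal. A minimal sketch with a simulated trace and illustrative thresholds:

    import numpy as np
    from scipy.signal import find_peaks

    fs = 32                                      # hypothetical GSR sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)                 # one minute of data
    gsr = 2 + 0.3 * np.exp(-((t - 20) ** 2) / 2) + 0.4 * np.exp(-((t - 45) ** 2) / 3)
    gsr += 0.01 * np.random.default_rng(0).standard_normal(t.size)

    # Candidate events: peaks with some minimum prominence, at least 5 s apart.
    peaks, _ = find_peaks(gsr, prominence=0.1, distance=5 * fs)
    print("candidate emotional events at t =", t[peaks], "s")

Detected events could then be aligned with the other modalities (eye tracking, EEG, face video) for multimodal emotion recognition.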

Test theories of neuroscience & cognitive psychology for information processing

  • Multiple timescales in perception & action
  • Hierarchical & statistical learning

Create a computational model of the uncanny valley effect using machine learning

The uncanny valley (UV) is a negative emotional reaction to artificial characters that look almost but not quite human. We have collected EEG data from 30 subjects on this topic, and we need a thorough analysis of the data using standard EEG analysis tools to better understand how this phenomenon works.
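A minimal sketch of what such a standard analysis could look like with MNE-Python; the file name, event codes and time windows below are placeholders rather than the lab’s actual recording setup:

    import mne

    raw = mne.io.read_raw_fif("uv_subject01_raw.fif", preload=True)   # hypothetical file
    raw.filter(l_freq=0.1, h_freq=40.0)                               # band-pass suitable for ERPs

    events = mne.find_events(raw)                                     # assumes a stimulus trigger channel
    event_id = {"human": 1, "uncanny": 2, "robot": 3}                 # hypothetical condition codes
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                        baseline=(None, 0), preload=True)

    # Compare evoked responses to almost-human vs. clearly artificial characters.
    mne.viz.plot_compare_evokeds({
        "human": epochs["human"].average(),
        "uncanny": epochs["uncanny"].average(),
    })

Differences between the evoked responses (and, beyond that, time-frequency or source-level analyses) would be a natural starting point for understanding the effect.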