This page contains a few ideas for potential Master’s and Bachelor’s theses in the lab. We make our best effort to keep this page updated; however, there might be some outdated projects on this page, and some new ideas might not be listed here yet. If you are interested in these topics, please do not hesitate to contact us; you can find our email addresses on the contacts page.

Create a computational model of the uncanny valley effect using machine learning

The uncanny valley (UV) is a negative emotional reaction to artificial characters that look almost, but not quite, human. We have collected EEG data from 30 subjects on this topic, and we need a thorough analysis of the data using standard EEG analysis tools to better understand how this phenomenon works.

Investigate the application of geometric deep learning to encode the interaction between electrodes in EEG recordings

  • Stable graph inference in EEG signals
    Using graph neural networks, we can infer a latent graph structure of the signal, allowing us to use graph analysis to understand the trained model. Turning this into a reliable method of analysing EEG signals would be a huge development in the field! Several problems remain, though: to begin with, the learned graph is unstable across different model initialisations, and stability is critical for making the method reliable. We must develop ways to make the model converge on the same graph structure given the same dataset (see the graph-stability sketch after this list).
  • Discretised raw EEG transformer
    Through their powerful attention mechanism, transformer models have shown themselves to be strong models of sequences. They are, however, mainly optimised for sequences of discrete tokens, whereas EEG comes in the form of continuous data, so applying the transformer architecture to raw EEG is not optimal. The goal of this project is to discretise the signal using an information-theoretical technique called permutation entropy (see the ordinal-pattern sketch after this list). This could allow better generalisation and performance of the model, harnessing the strength of transformers.
  • Explainable EEG transformer
    Transformers have been shown to be excellent models of word sequences, and recently we have seen them used on EEG data with reasonable performance. With some clever engineering, we can make these models much more explainable, allowing us to understand which features of the EEG signal matter. This is an incredibly important development if neuroscientists want to use deep learning models to answer scientific questions about the brain. Can we leverage this technique to explain some cognitive phenomena?
  • Transformers for EEG graph inference
    Transformers’ attention mechanism makes them mathematically equivalent to fully connected graph neural networks, since attention connects every element to every other element by default. When we tune the attention weights, we are therefore effectively learning a graph structure (see the attention-as-adjacency sketch after this list). Can we adapt transformer models to make them understandable as graph-learning models? This would give researchers a level of explainability not seen before in transformers.
  • EEG private autoencoders
    EEG signals from different people can look very different, mainly because our brains work in different ways. Learning a good shared representation across people would greatly help the analysis of EEG signals, and we think a private encoding layer per person could solve this problem (see the private-autoencoder sketch after this list). The dataset and/or model to be worked on here is still undecided.
  • EEG transfer learning
    A big problem in applying deep learning to EEG data is access to large amounts of training data: a small lab is simply not able to collect the massive amounts of data needed to train complex models. In other areas of deep learning (such as computer vision and language models), transfer learning and fine-tuning are commonly used techniques. Is it possible to do the same with EEG? That is, can we take some large existing datasets, learn the general structure of EEG signals from them, and then transfer that knowledge to a smaller dataset to increase performance (see the fine-tuning sketch after this list)?
  • MoviEEG
    There is a big need in neuroscience for larger EEG datasets with well-structured inputs. However, EEG experiments can be boring and tiring, making them difficult to complete for longer than an hour at a time. What if we could turn the not-so-great lab experience into a nice movie night? Then an experiment could easily last two hours! What we need is a couple of well-labelled movies, with cuts, scene descriptions, and dialogue annotated. Then we can simply play the movie and create a much-needed dataset in the meantime!
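
For the stable-graph-inference project, a useful first step is simply quantifying the instability. A minimal sketch in Python/NumPy, assuming the training runs have already produced one learned adjacency matrix per random initialisation (the function name and inputs are our own illustration, not a fixed design):

```python
import numpy as np

def graph_stability(adjacencies):
    """Mean pairwise Pearson correlation between the edge weights of
    adjacency matrices learned from different random seeds: 1.0 means
    every seed recovered the same graph, values near 0 mean the
    inferred graph is unstable."""
    flat = np.stack([a.flatten() for a in adjacencies])
    corr = np.corrcoef(flat)  # seed-by-seed correlation matrix
    return corr[np.triu_indices_from(corr, k=1)].mean()
```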
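
For the discretised raw EEG transformer, the core operation is mapping a continuous channel to discrete symbols via ordinal patterns, the construction underlying permutation entropy. A minimal NumPy sketch; the order/delay defaults and function names are illustrative assumptions:

```python
import math
from itertools import permutations

import numpy as np

def ordinal_tokens(signal, order=3, delay=1):
    """Replace each window of `order` samples (spaced `delay` apart)
    by the index of its rank pattern, giving a sequence over order!
    discrete tokens that a transformer can embed like word indices."""
    pattern_index = {p: i for i, p in enumerate(permutations(range(order)))}
    n = len(signal) - (order - 1) * delay
    windows = (signal[t : t + order * delay : delay] for t in range(n))
    return np.array([pattern_index[tuple(np.argsort(w))] for w in windows])

def permutation_entropy(signal, order=3, delay=1):
    """Normalised Shannon entropy of the ordinal-token distribution."""
    _, counts = np.unique(ordinal_tokens(signal, order, delay), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / math.log2(math.factorial(order)))
```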
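
The transformers-for-graph-inference idea can be prototyped with off-the-shelf attention by reading the averaged attention matrix as a weighted adjacency over electrodes. A PyTorch sketch; the shapes and the batch-averaging step are illustrative choices:

```python
import torch
import torch.nn as nn

batch, n_electrodes, d_model, n_heads = 8, 32, 64, 4

# One EEG window embedded as a d_model-vector per electrode (dummy data here).
x = torch.randn(batch, n_electrodes, d_model)
attention = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

# Ask PyTorch to return the attention map, averaged over heads:
# one (electrodes x electrodes) matrix per sample.
_, weights = attention(x, x, x, need_weights=True, average_attn_weights=True)

# Entry (i, j) is how strongly electrode i attends to electrode j, i.e. a
# learned, weighted, fully connected graph over the electrodes.
adjacency = weights.mean(dim=0)       # average over the batch
edges = adjacency > adjacency.mean()  # crude thresholding into a sparse graph
```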
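
Since the dataset and model for the private-autoencoder project are undecided, the following PyTorch sketch only illustrates the architectural idea: one private encoder per subject feeding a shared latent space and decoder. All layer sizes are placeholders:

```python
import torch
import torch.nn as nn

class PrivateAutoencoder(nn.Module):
    """Each subject gets a private encoder that maps their idiosyncratic
    EEG features into a common latent space; a single shared decoder
    pushes those latents towards subject-independent structure."""

    def __init__(self, n_subjects, n_features, latent_dim=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Linear(n_features, latent_dim) for _ in range(n_subjects)
        )
        self.decoder = nn.Linear(latent_dim, n_features)

    def forward(self, x, subject_id):
        z = self.encoders[subject_id](x)  # private, subject-specific step
        return self.decoder(z), z         # shared reconstruction + latent

# Usage: encode and reconstruct a batch of windows from subject 3.
model = PrivateAutoencoder(n_subjects=30, n_features=128)
x_hat, z = model(torch.randn(16, 128), subject_id=3)
```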
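
Finally, a minimal fine-tuning recipe for the transfer-learning project, assuming some backbone has already been pretrained on a large public EEG corpus (`backbone` and `feat_dim` are hypothetical placeholders for that model and its output size):

```python
import torch.nn as nn

def build_finetune_model(backbone, feat_dim, n_classes):
    """Freeze a pretrained EEG feature extractor and attach a fresh
    classification head, so only the head is trained on the small
    target dataset while the general representation is reused."""
    for param in backbone.parameters():
        param.requires_grad = False        # keep pretrained weights fixed
    head = nn.Linear(feat_dim, n_classes)  # the only trainable part
    return nn.Sequential(backbone, head)
```

If the frozen backbone underperforms, the usual next step is unfreezing its last few layers with a small learning rate.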

Create computational models of player experience

  • Can we detect and characterise different players’ emotions and cognitive states while they are playing?
  • Can we adapt the gameplay? Can we generate new content driven by the experience? (generative AI in games)
  • GamEEG
    There is a big need in neuroscience for larger EEG datasets with well-structured inputs. However, EEG experiments can be boring and tiring, making them difficult to complete for longer than an hour at a time. What if we could turn the not-so-great lab experience into a gaming session? Then an experiment could easily last two hours! We would need to develop our own games or use open-source games, so that we can record every event happening in the game. Then we can invite people to come and play while we record their brain activity, creating a much-needed dataset!

Investigate how to integrate non-verbal cues into conversations with a conversational AI (e.g. ChatGPT)

  • How can a virtual agent understand user emotions? Can non-verbal cues be used to improve the interaction with virtual characters?

Test theories of information processing from neuroscience & cognitive psychology

  • Multiple timescales in perception & action
  • Hierarchical & statistical learning