
Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion


Framework of our proposed method.

Some examples of using EEG to reconstruct stimulus images.

News:

  • [2024/09/26] Our paper is accepted to NeurIPS 2024.
  • [2024/09/25] We have updated the arxiv paper.
  • [2024/08/01] Update scripts for training and inference in different tasks.
  • [2024/05/19] Update the dataset loading scripts.
  • [2024/03/12] The arxiv paper is available.

Environment setup

Run setup.sh to quickly create a conda environment that contains the packages necessary to run our scripts; activate the environment with conda activate BCI.

. setup.sh

You can also create a new conda environment and install the required dependencies by running

conda env create -f environment.yml
conda activate BCI

pip install wandb
pip install einops

Additional packages are needed to run all of the code:

pip install open_clip_torch

pip install transformers==4.28.0.dev0
pip install diffusers==0.24.0

# Below are the braindecode installation commands for the most common use cases.
pip install braindecode==0.8.1

Quick training and test

If you want to quickly reproduce the results in the paper, please download the relevant preprocessed data and model weights from Hugging Face first.
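
For example, the files can be fetched programmatically with huggingface_hub; this is a minimal sketch, and the repository id below is a placeholder that should be replaced with the one listed on our Hugging Face page.

from huggingface_hub import snapshot_download

# Minimal sketch, assuming huggingface_hub is installed; "<hf-repo-id>" is a placeholder
# for the dataset/model repository listed on the project's Hugging Face page.
snapshot_download(
    repo_id="<hf-repo-id>",
    repo_type="dataset",   # use repo_type="model" for the pretrained weights
    local_dir="./data",
)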

1. Image Retrieval

We provide a script that trains the EEG encoder with our training strategy and evaluates it during training. Please change the dataset path to yours and run:

cd Retrieval/
python ATMS_retrieval.py --logger True --gpu cuda:0  --output_dir ./outputs/contrast
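
Under the hood, retrieval is scored by cosine similarity between EEG embeddings and the CLIP image embeddings of the candidate images. The snippet below is a minimal, self-contained sketch of that scoring, with random tensors standing in for the real embeddings and an illustrative embedding size.

import torch
import torch.nn.functional as F

# Random tensors stand in for the real embeddings: one EEG embedding and one CLIP image
# embedding per test class, projected to a shared space (1024 dims is illustrative).
eeg_emb = F.normalize(torch.randn(200, 1024), dim=-1)
img_emb = F.normalize(torch.randn(200, 1024), dim=-1)

sim = eeg_emb @ img_emb.T                    # cosine similarity matrix
labels = torch.arange(200)
top1 = (sim.argmax(dim=1) == labels).float().mean()
top5 = (sim.topk(5, dim=1).indices == labels[:, None]).any(dim=1).float().mean()
print(f"top-1: {top1:.3f}  top-5: {top5:.3f}")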

We also provide a script for joint-subject training, which trains on all subjects jointly and tests on a specific subject:

cd Retrieval/
python ATMS_retrieval_joint_train.py --joint_train --sub sub-01 True --logger True --gpu cuda:0  --output_dir ./outputs/contrast

Additionally, you can replicate the results of other methods (e.g. EEGNetV4) by running:

cd Retrieval/
python contrast_retrieval.py --encoder_type EEGNetv4_Encoder --epochs 30 --batch_size 1024
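
The EEGNetv4 baseline is taken from braindecode. The sketch below only shows how such an encoder can be instantiated and run on dummy data, assuming braindecode 0.8.1; the channel count, window length, and output dimension are placeholders, not the settings used by contrast_retrieval.py.

import torch
from braindecode.models import EEGNetv4

# Placeholder shapes: 63 channels, 250 time points, 1024-dimensional output embedding.
model = EEGNetv4(n_chans=63, n_outputs=1024, n_times=250)
x = torch.randn(8, 63, 250)   # (batch, channels, time points)
print(model(x).shape)         # one embedding per trial, expected (8, 1024)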

2. Image Reconstruction

We provide quick training and inference scripts for the CLIP pipeline of visual reconstruction. Please change the dataset path to yours and run zero-shot evaluation on the 200-class test set:

# Train the model and generate EEG features for subject 8
cd Generation/
python ATMS_reconstruction.py --insubject True --subjects sub-08 --logger True \
--gpu cuda:0  --output_dir ./outputs/contrast
# Reconstruct images for subject 8
Generation_metrics_sub8.ipynb

We also provide scripts for image reconstruction combined with the low-level pipeline.

cd Generation/

# step 1: train the VAE encoder and then generate low-level images
train_vae_latent_512_low_level_no_average.py

# step 2: load the low-level images and then reconstruct them
1x1024_reconstruct_sdxl.ipynb
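
Step 2 refines the low-level image with SDXL image-to-image. The snippet below is a rough sketch of that step with diffusers; the checkpoint, prompt, strength, and file names are illustrative assumptions, not necessarily what 1x1024_reconstruct_sdxl.ipynb uses.

import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

# Illustrative settings only; the notebook may use a different checkpoint, prompt source,
# and denoising strength. The input file name is hypothetical.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

low_level = Image.open("low_level_sample.png").convert("RGB").resize((1024, 1024))
image = pipe(prompt="a photo of the perceived object", image=low_level,
             strength=0.75, guidance_scale=7.5).images[0]
image.save("reconstruction.png")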

We provide scripts for caption generation combined with the semantic-level pipeline.

cd Generation/

# step 1: train feature adapter
image_adapter.ipynb

# step 2: get captions from the EEG latent
GIT_caption_batch.ipynb

# step 3: load text prompt and then reconstruct images
1x1024_reconstruct_sdxl.ipynb
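
Step 2 relies on a GIT captioning model. The sketch below shows plain image-to-caption generation with transformers; the checkpoint name and input file are assumptions, and GIT_caption_batch.ipynb drives the model from EEG-derived latents rather than from a ground-truth image.

from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

# The checkpoint and input file are assumptions for illustration only.
processor = AutoProcessor.from_pretrained("microsoft/git-large-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-coco")

image = Image.open("reconstruction.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])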

To evaluate the quality of the reconstructed images, modify the paths of the reconstructed images and the original stimulus images in the notebook and run:

# Compute metrics, adapted from MindEye
Reconstruction_Metrics_ATM.ipynb
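
For reference, two of the low-level metrics in the MindEye suite, pixel-wise correlation and SSIM, can be computed along the following lines; the notebook additionally reports higher-level metrics (AlexNet, Inception, CLIP two-way comparisons). The file names here are hypothetical.

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

# Load a stimulus image and its reconstruction at a common resolution (file names are hypothetical).
gt = np.array(Image.open("stimulus.png").convert("L").resize((256, 256)), dtype=np.float32)
rec = np.array(Image.open("reconstruction.png").convert("L").resize((256, 256)), dtype=np.float32)

pixcorr = np.corrcoef(gt.ravel(), rec.ravel())[0, 1]       # pixel-wise correlation
ssim_val = ssim(gt, rec, data_range=gt.max() - gt.min())   # structural similarity
print(f"PixCorr: {pixcorr:.3f}  SSIM: {ssim_val:.3f}")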

Data availability

We provide the preprocessed EEG and MEG data used in our paper on Hugging Face, as well as the raw image data.

Note that the experimental paradigms of the THINGS-EEG and THINGS-MEG datasets differ, so we provide the images and data for the two datasets separately.

You can also download the original THINGS-EEG and THINGS-MEG datasets from osf.io.

The raw and preprocessed EEG dataset and the training and test images are available on OSF:

  • Raw EEG data: ../project_directory/eeg_dataset/raw_data/.
  • Preprocessed EEG data: ../project_directory/eeg_dataset/preprocessed_data/.
  • Training and test images: ../project_directory/image_set/.

The raw and preprocessed MEG dataset and the training and test images are available on OpenNeuro.
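
Once downloaded, the preprocessed EEG files can be inspected with numpy. The file name and dictionary keys below follow the public THINGS-EEG preprocessed release and may differ if you re-run the preprocessing with other options.

import numpy as np

# Adjust the subject and path to your local layout; keys follow the public THINGS-EEG release.
path = "../project_directory/eeg_dataset/preprocessed_data/sub-01/preprocessed_eeg_training.npy"
data = np.load(path, allow_pickle=True).item()
print(data.keys())
eeg = data["preprocessed_eeg_data"]   # (image conditions, repetitions, channels, time points)
print(eeg.shape)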

EEG/MEG preprocessing

Modify your path and execute the following code to perform the same preprocessing on the raw data as in our experiment:

cd EEG-preprocessing/
python preprocessing.py

cd ../MEG-preprocessing/
# then open and run pre_possess.ipynb
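
Preprocessing of this kind typically involves filtering, epoching around stimulus onset, and downsampling. The sketch below shows the general shape of such a pipeline with MNE-Python; the file name and all parameters are illustrative assumptions, not the exact settings defined in preprocessing.py.

import mne

# Illustrative parameters only; preprocessing.py defines the exact filter band, epoch
# window, and sampling rate used in the paper. The raw file name is hypothetical.
raw = mne.io.read_raw("sub-01_raw_eeg.vhdr", preload=True)
raw.filter(l_freq=0.1, h_freq=100.0)
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
epochs.resample(250)
print(epochs.get_data().shape)   # (n_trials, n_channels, n_time_points)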

You can also get the dataset used in this project through the BaiduNetDisk link to run the code.

TODO

  • [√] Release retrieval and reconstruction scripts.
  • [√] Update training scripts of reconstruction pipeline.
  • Add validation sets to improve the accuracy of performance evaluation.

Acknowledgements

1. Thanks to Y. Song et al. for their contributions to dataset preprocessing and neural network architecture; we refer to their work:
"Decoding Natural Images from EEG for Object Recognition".
Yonghao Song, Bingchuan Liu, Xiang Li, Nanlin Shi, Yijun Wang, and Xiaorong Gao.

2. We also thank the authors of SDRecon for providing their code and results. Some parts of the training script are based on MindEye and MindEye2. Thanks for these awesome research works.

3. The THINGS-EEG dataset cited in the paper:
"A large and rich EEG dataset for modeling human visual object recognition".
Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy.

4. The THINGS-MEG dataset used in the paper:
"THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior".
Hebart, Martin N., Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Y. Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, and Chris I. Baker.

Citation

@article{li2024visual,
  title={Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion},
  author={Li, Dongyang and Wei, Chen and Li, Shiying and Zou, Jiachen and Liu, Quanying},
  journal={arXiv preprint arXiv:2403.07721},
  year={2024}
}