Instructions to Run FBMSNet on the Korea University and BCI Competition IV-2a datasets.
These instructions have been tested on a Windows 10 system. They should work on other platforms with appropriate modifications.
Step 1: Set up a virtual environment
-----------------------------------------------------------------------------------------------------------------------
All the code has been tested to work in a virtual environment containing the packages listed in the env.txt file.
1. Create a new virtual environment named "env_fbmsnet" with Python 3.7 using Anaconda3 and activate it:
conda create --name env_fbmsnet python=3.7
conda activate env_fbmsnet
2. Install the required packages:
conda install -c anaconda ujson=1.35
pip install -U scikit-learn
pip install pandas
pip install matplotlib
pip install torchsummary
pip install resampy
pip install seaborn
pip install pygame
pip install selenium
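Optionally, you can confirm that these packages import cleanly from a Python prompt (a quick sanity check, not a required step; torchsummary is omitted here because it needs torch, which is installed in steps 4-5):

import ujson, sklearn, pandas, matplotlib, resampy, seaborn, pygame, selenium
print('all Step-1 packages import OK')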
3. Change the current working directory to the location where you have stored env.txt and install the required packages (here E: is an example drive):
E:
pip install -r env.txt
4. Download the offline installation files for torch and torchvision from https://download.pytorch.org/whl/torch_stable.html. Use "Ctrl + F" to search for the installation files of the specified versions of torch and torchvision and download them. It is recommended to store them in the same path as env.txt.
torch version : torch-1.3.1-cp37-cp37m-win_amd64
torchvision version : torchvision-0.4.2-cp37-cp37m-win_amd64
5. When you have finished downloading, use the pip command to install them:
pip install torch-1.3.1-cp37-cp37m-win_amd64.whl
pip install torchvision-0.4.2-cp37-cp37m-win_amd64.whl
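You can verify the installation from a Python prompt (an optional sanity check; the expected versions follow from the wheels above):

import torch
import torchvision
print(torch.__version__)          # expected to start with 1.3.1
print(torchvision.__version__)    # expected to start with 0.4.2
print(torch.cuda.is_available())  # True only with a CUDA-enabled build and GPU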
Congratulations, you have completed all the steps to create the virtual environment needed to run the source code!
Step 2: Get the data
-------------------------------------------------------------------------------------------------------------------------
BCI competition IV-2a:
The BCI Competition IV-2a dataset is only accessible to registered users, so you need to download the data manually.
1. Register and download the train and test data at http://www.bbci.de/competition/iv/ or http://bbci.de/competition/iv/download .
2. The data will be provided as a zip file. Extract it and copy its contents to ../FBCNetToolbox/data/bci42a/originalData
3. Download the test data labels from http://bbci.de/competition/iv/results/ds2a/true_labels.zip
4. Extract the contents of the zip file and copy them to ../FBCNetToolbox/data/bci42a/originalData
5. At the end of this step you should have 18 .gdf and 18 .mat files in the folder ../FBCNetToolbox/data/bci42a/originalData
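A short check like the one below can confirm the copy (a minimal sketch; it assumes the relative path used above):

from pathlib import Path

# Assumed location from the steps above; adjust if your layout differs.
data_dir = Path('../FBCNetToolbox/data/bci42a/originalData')
gdf_files = sorted(data_dir.glob('*.gdf'))
mat_files = sorted(data_dir.glob('*.mat'))
print(len(gdf_files), '.gdf files,', len(mat_files), '.mat files')  # expect 18 and 18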
Korea University Dataset:
We recommend running the analysis for BCI Competition IV-2a before analyzing the Korea University dataset.
The Korea University dataset is accessible over FTP, so all the motor-imagery data can be downloaded using the Python function parseKoreaDataset in savedata.py.
Any of the analyses below calls this function, so running any analysis will fetch and save the Korea data if it is not already present.
Be mindful that this is a huge dataset and downloading it can take a few hours.
Alternatively, if you already have this dataset, copy the data files to the directory ../FBCNetToolbox/data/korea/originalData
The expected file structure of the data is as follows:
originalData/
    session1/
        s1/EEG_MI.mat
        s2/EEG_MI.mat
        ...
    session2/
        s1/EEG_MI.mat
        s2/EEG_MI.mat
        ...
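A small script can verify this layout before running the analyses (a sketch, assuming the directory above and subject folders named s1, s2, ...):

from pathlib import Path

root = Path('../FBCNetToolbox/data/korea/originalData')
for session in ('session1', 'session2'):
    subject_dirs = sorted((root / session).glob('s*'))
    missing = [d.name for d in subject_dirs if not (d / 'EEG_MI.mat').exists()]
    print(session, ':', len(subject_dirs), 'subjects, missing:', missing or 'none')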
Step 3: Run the classification codes
------------------------------------------------------------------------------------------------------------------------------
For both datasets, two analyses can be performed:
cv: a 10-fold cross-validation analysis using session 1 data
ho: a hold-out analysis in which session 1 data is used for training and session 2 data is used for testing
1. HO analysis:
HO analysis is performed by ho.py in the classify directory.
To analyze the data, activate the environment created in Step 1 and run:
conda activate env_fbmsnet
python codes/classify/ho.py <datasetId> <networkName>
Here:
<datasetId> : the id of the dataset to analyze (0/1)
    0 : bci42a data (default)
    1 : korea data
<networkName> : the network to use for classification
    'FBCNet' (default) / 'deepConvNet' / 'eegNet'
For example, to analyze the bci42a data using 'FBCNet', run:
python codes/classify/ho.py 0 'FBCNet'
After the execution, all classification results will be stored in the folder ../FBCNetToolbox/output/<datasetName>/HO
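To run the HO analysis for several configurations in one go, a small driver script can loop over the command line described above (a hypothetical convenience wrapper, not part of the repository):

import subprocess

# Hypothetical batch runner; dataset ids and network names follow the list above.
for dataset_id in (0, 1):  # 0: bci42a, 1: korea
    subprocess.run(
        ['python', 'codes/classify/ho.py', str(dataset_id), 'FBCNet'],
        check=True)  # stop on the first failing run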
2. CV analysis:
CV analysis is performed by cv.py in the classify directory.
To analyze the data, activate the environment created in Step 1 and run:
conda activate env_fbmsnet
python codes/classify/cv.py <datasetId> <networkName>
The <networkName> and <datasetId> arguments carry the same meaning as in the HO analysis.
Please understand that CV analysis trains 10 models for each subject, so it is slow; it can take up to 12 hours on the Korea dataset, depending on the available hardware.
After the execution, all classification results will be stored in the folder ../FBCNetToolbox/output/<datasetName>/CV
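For orientation, the per-subject 10-fold split that the CV analysis performs can be illustrated with scikit-learn (installed in Step 1); this is a conceptual sketch with placeholder data, not the repository's actual training code:

import numpy as np
from sklearn.model_selection import StratifiedKFold

# Placeholder data shaped like one BCI IV-2a session:
# 288 trials, 22 channels, 1000 samples, 4 balanced classes.
X = np.random.randn(288, 22, 1000)
y = np.tile(np.arange(4), 72)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # cv.py trains one model per fold on the train split and scores it on the test split
    print(f'fold {fold}: {len(train_idx)} train / {len(test_idx)} test trials')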
Step 4: Exploration and custom tweaking
------------------------------------------------------------------------------------------------------------------------------------
Feel free to change any classification parameters or training methods by modifying the code files in the classify and centralRepo directories.
Happy exploration!