Merge pull request #364 from kaeldai/fix/lgn_movies_examples
Fixing filter_movie/ example

Showing 11 changed files with 1,168 additions and 32 deletions.

@@ -1,2 +1,58 @@
This example shows how a user can provide any movie file saved as .npy to simulate LGN responses.
The input simply needs to be an (x, y, t) array that describes frames over time. See the config, which has an attribute under INPUTS called "data_file" that points to the desired npy movie.
# FilterNet simulations from arbitrary movies

One optional type of visual stimulus for a FilterNet simulation is a movie generated by the modeler. The input file should be a .npy or .npz file containing a matrix of shape `(frames, rows, columns)`. The movie should be grey-scaled and single-channel (the LGN model was not optimized for color movies). The array can be either a floating-point matrix with contrast values normalized to [-1.0, +1.0], or an integer (uint8) matrix with values in [0, 255].
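
For instance, a minimal sketch of creating and saving such a movie array with NumPy (the file name, dimensions, and frame count below are arbitrary illustrative choices, not values required by FilterNet):

```python
import os
import numpy as np

# 1000 frames of a 120-row x 240-column grey-scale movie, contrast values in [-1.0, +1.0]
n_frames, n_rows, n_cols = 1000, 120, 240
movie = np.random.uniform(-1.0, 1.0, size=(n_frames, n_rows, n_cols)).astype(np.float32)

# Alternatively, an integer movie with pixel values in [0, 255]
movie_uint8 = np.random.randint(0, 256, size=(n_frames, n_rows, n_cols), dtype=np.uint8)

# Save to .npy so the "data_file" attribute in the config can point at it
os.makedirs('movies', exist_ok=True)
np.save('movies/my_custom_movie.npy', movie)
```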

In this example network we show multiple ways of creating and playing custom movie files. Two of the movies are generated from Allen Brain Observatory stimuli, and one is simply a grey-screen movie.


## Generating movie files

Because the movie .npy files can get quite large, you must first generate them using the `create_movie.py` script.

### Natural Scenes

As part of the Brain Observatory experiments, mice were shown a sequence of 118 different static images in a randomized order. To recreate this experiment for FilterNet, we use the following command to generate a movie from 20 of those images:

```bash
$ python create_movie.py --n-images=20 --images-per-sec=10 --greyscreen-pre=500.0 natural-scenes
```

with the following options (illustrated in the sketch after this list):
* **--n-images=20** - Use only 20 of the 118 images in the Brain Observatory data set.
* **--images-per-sec=10** - Show 10 images every second (i.e., the screen changes every 100 ms).
* **--greyscreen-pre=500.0** - Add a grey screen for 500 ms at the beginning of the movie.
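
For intuition, the kind of movie these options describe could be assembled roughly as follows. This is only a sketch using random placeholder images, not the actual `create_movie.py` implementation, and the 120x240 frame size is an assumption made for illustration:

```python
import numpy as np

fps = 1000                # movies in this example are played back at 1000 frames per second
images_per_sec = 10       # one image every 100 ms
n_images = 20
rows, cols = 120, 240     # assumed frame size for this sketch

# Placeholder "images" standing in for 20 grey-scaled natural scenes (contrast in [-1, 1])
images = np.random.uniform(-1.0, 1.0, size=(n_images, rows, cols)).astype(np.float32)

# 500 ms of grey screen (contrast 0.0) at the start of the movie
greyscreen_pre = np.zeros((int(0.5 * fps), rows, cols), dtype=np.float32)

# Hold each image on screen for fps/images_per_sec frames (here 100 frames = 100 ms)
frames_per_image = fps // images_per_sec
scenes = np.repeat(images, frames_per_image, axis=0)

movie = np.concatenate([greyscreen_pre, scenes], axis=0)   # 2500 frames = 2.5 seconds total
np.save('ns_sketch.2500ms.1000fps.npy', movie)
```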

The .npy file will be saved to the *movies/* folder. To have FilterNet generate retinal-thalamic spike trains from this movie, use the `config.simulation_naturalscenes.json` config file:

```bash
$ python run_filternet.py config.simulation_naturalscenes.json
```

The resulting spike-train and rates files will be saved to the *output_natural_scenes/* folder, as specified in the config.
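
To take a quick look at the results, the CSV outputs can be loaded with pandas. This is just a sketch; the file names come from the "output" section of the config, but the delimiter used here is an assumption and may need adjusting:

```python
import pandas as pd

# File names are set in the "output" section of config.simulation_naturalscenes.json.
# A whitespace delimiter is assumed here; adjust if the files are comma-separated.
spikes_df = pd.read_csv('output_natural_scenes/spikes.csv', sep=r'\s+')
rates_df = pd.read_csv('output_natural_scenes/rates.csv', sep=r'\s+')

print(spikes_df.head())
print(rates_df.head())
```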

### Natural Movies (e.g. Touch of Evil)

For more naturalistic stimuli, the Brain Observatory experiments included clips from the Orson Welles film "Touch of Evil": two 30-second clips and one 120-second clip. To use this input with FilterNet, we need to not only convert the movie to .npy, but also resize it to fit our model's field size (120x240) and upsample it to 1000 fps. We can do this with the following command:

```bash
$ python create_movie.py touch-of-evil
```
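
Conceptually, the spatial resizing and temporal upsampling might look something like the sketch below. It is not the actual `create_movie.py` implementation; the placeholder frames and the assumed 30 fps source rate are illustrative only.

```python
import numpy as np
from scipy.ndimage import zoom

# Placeholder for a few seconds of raw footage: 90 frames at an assumed 30 fps, 304x608 pixels
raw = np.random.uniform(-1.0, 1.0, size=(90, 304, 608)).astype(np.float32)
raw_fps, target_fps = 30, 1000
target_rows, target_cols = 120, 240

# Spatially resize every frame to the model's field size (120x240)
scale = (1.0, target_rows / raw.shape[1], target_cols / raw.shape[2])
resized = zoom(raw, zoom=scale, order=1)

# Temporally upsample by repeating each frame; 1000 // 30 = 33 repeats is only approximate,
# so exact frame timing would need a little more care than this sketch takes.
movie = np.repeat(resized, target_fps // raw_fps, axis=0)
np.save('natural_movie_sketch.npy', movie)
```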

The resulting npy file will be saved to the *movies/* folder. You can then run

```bash
$ python run_filternet.py config.simulation_touchofevil.json
```

to run FilterNet against 3 seconds of the 30-second clip and save the output to the *output_natural_movies_one/* folder, as specified in the config.

### Grey Screen

The `config.simulation_greyscreen.json` config will run against a movie consisting of nothing but a static grey screen, which can be a useful sanity check of our model. To generate the appropriate movie npy file, run the following command:

```bash
$ python create_movie.py greyscreen
```
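
Since a grey screen is just a movie of constant zero contrast, an equivalent array can be produced directly with NumPy. A sketch (the file name matches the "data_file" entry in `config.simulation_greyscreen.json`; the 120x240 field size follows the other movies in this example):

```python
import os
import numpy as np

# 2000 ms at 1000 fps -> 2000 frames; contrast 0.0 everywhere is a uniform grey screen
n_frames, rows, cols = 2000, 120, 240
grey_movie = np.zeros((n_frames, rows, cols), dtype=np.float32)

os.makedirs('movies', exist_ok=True)
np.save('movies/grey_screen.2000ms.1000fps.normalized.npy', grey_movie)
```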

@@ -1,22 +1,114 @@
import os
import pickle
import numpy as np

from bmtk.builder import NetworkBuilder


field_size = (304, 608)  # size of movie screen (pixels)
cell_grid = (5, 5)  # place cells in a grid layout of NxM
xs, ys = np.meshgrid(np.linspace(0, field_size[0], num=cell_grid[0]), np.linspace(0, field_size[1], num=cell_grid[1]))

lgn_net = NetworkBuilder('lgn')
lgn_net.add_nodes(
    N=cell_grid[0]*cell_grid[1],
    ei='e',
    model_type='virtual',
    model_template='lgnmodel:LGNOnOFFCell',
    dynamics_params='lgn_on_off_model.json',
    sigma_on=(2.0, 2.0),
    sigma_off=(4.0, 4.0),
    x=xs.flatten(),
    y=ys.flatten()
)

lgn_net.save_nodes(output_dir='network')
X_grids = 2  # 15
Y_grids = 2  # 10
X_len = 240.0  # In linear degrees
Y_len = 120.0  # In linear degrees


def generate_positions_grids(N, X_grids, Y_grids, X_len, Y_len):
    # Randomly place N cells within each tile of an X_grids x Y_grids tiling of the visual field
    width_per_tile = X_len/X_grids
    height_per_tile = Y_len/Y_grids

    X = np.zeros(N * X_grids * Y_grids)
    Y = np.zeros(N * X_grids * Y_grids)

    counter = 0
    for i in range(X_grids):
        for j in range(Y_grids):
            X_tile = np.random.uniform(i*width_per_tile, (i+1) * width_per_tile, N)
            Y_tile = np.random.uniform(j*height_per_tile, (j+1) * height_per_tile, N)
            X[counter*N:(counter+1)*N] = X_tile
            Y[counter*N:(counter+1)*N] = Y_tile
            counter += 1
    return np.column_stack((X, Y))


def get_filter_spatial_size(N, X_grids, Y_grids, size_range):
    # Assign a spatial filter size to each cell: a fixed value if size_range has a single entry,
    # otherwise drawn from a triangular distribution between size_range[0] and size_range[1]
    spatial_sizes = np.zeros(N * X_grids * Y_grids)
    counter = 0
    for i in range(X_grids):
        for j in range(Y_grids):
            if len(size_range) == 1:
                sizes = np.ones(N) * size_range[0]
            else:
                sizes = np.random.triangular(size_range[0], size_range[0] + 1, size_range[1], N)
            spatial_sizes[counter * N:(counter + 1) * N] = sizes
            counter += 1

    return spatial_sizes


lgn_models = [
    {
        'N': 8,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:tOFF_TF15',
        'size_range': [2, 10],
        'dynamics_params': 'tOFF_TF15_3.44215357_-2.11509939_8.27421573_20.0_0.0_ic.json'
    },
    {
        'N': 8,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:sONsOFF_001',
        'size_range': [6],
        'dynamics_params': 'sOFF_TF4_3.5_-2.0_10.0_60.0_15.0_ic.json',
        'non_dom_params': 'sON_TF4_3.5_-2.0_30.0_60.0_25.0_ic.json',
        'sf_sep': 6.0
    },
    {
        'N': 5,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:sONtOFF_001',
        'size_range': [9],
        'dynamics_params': 'tOFF_TF8_4.222_-2.404_8.545_23.019_0.0_ic.json',
        'non_dom_params': 'sON_TF4_3.5_-2.0_30.0_60.0_25.0_ic.json',
        'sf_sep': 4.0
    }
]

lgn = NetworkBuilder('lgn')
for params in lgn_models:
    # Get position of lgn cells and keep track of the averaged location.
    # For now, use randomly generated values.
    total_N = params['N'] * X_grids * Y_grids

    # Get positional coordinates of cells
    positions = generate_positions_grids(params['N'], X_grids, Y_grids, X_len, Y_len)

    # Get spatial filter size of cells
    filter_sizes = get_filter_spatial_size(params['N'], X_grids, Y_grids, params['size_range'])

    lgn.add_nodes(
        N=total_N,
        ei=params['ei'],
        model_type=params['model_type'],
        model_template=params['model_template'],
        x=positions[:, 0],
        y=positions[:, 1],
        dynamics_params=params['dynamics_params'],

        # TODO: Come up with a better name than non-dominant parameters (spatial-params?)
        non_dom_params=params.get('non_dom_params', None),

        # TODO: See if it's possible to calculate spatial sizes during the simulation.
        spatial_size=filter_sizes,

        # NOTE: If tuning_angle is not defined, it will be randomly generated during the simulation. But
        # when evaluating a large network many times it will be more efficient to store it in the nodes file.
        tuning_angle=np.random.uniform(0.0, 360.0, total_N),

        # TODO: Can sf_sep be stored in the params json file?
        sf_sep=params.get('sf_sep', None)
    )

lgn.build()
lgn.save(output_dir='network')

@@ -0,0 +1,39 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_greyscreen",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 2000.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/grey_screen.2000ms.1000fps.normalized.npy",
      "frame_rate": 1000.0
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}

examples/filter_movie/config.simulation_naturalscenes.json (40 additions, 0 deletions)
@@ -0,0 +1,40 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_natural_scenes",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 2500.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/ns_20images.set00.2500ms.1000fps.10ips.npy",
      "frame_rate": 1000.0,
      "normalize": true
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}

@@ -0,0 +1,44 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_natural_movies_one",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 3000.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/natural_movie_one.29700ms.120x240.npy",
      "frame_rate": 1000.0,
      "normalize": true,
      "evaluation_options": {
        "t_min": 0.0,
        "t_max": 3.0
      }
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}