
Merge pull request #364 from kaeldai/fix/lgn_movies_examples
Fixing filter_movie/ example
kaeldai authored Apr 30, 2024
2 parents afd8fbb + 9bab3c1 commit 704e4e4
Showing 11 changed files with 1,168 additions and 32 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -29,6 +29,9 @@ docs/tutorial/sim_ch*
docs/tutorial/sim_dyn_syn
examples/**/output*/**
examples/bio_simulated_annealing/updated_weights/*
examples/filter_movie/boc/*
examples/filter_movie/bob_images/*
examples/filter_movie/movies/*
/.eggs
benchmarks/.asv
**/x86_64/**
6 changes: 1 addition & 5 deletions bmtk/simulator/filternet/default_setters/cell_loaders.py
@@ -203,11 +203,7 @@ def default_cell_loader(node, template_name, dynamics_params):
        cell = sep_ts_onoff_cell

    elif model_name == 'LGNOnOFFCell':
        wts = [node['weight_dom_0'], node['weight_dom_1']]
        kpeaks = [node['kpeaks_dom_0'], node['kpeaks_dom_1']]
        delays = [node['delay_dom_0'], node['delay_dom_1']]
        # transfer_function = ScalarTransferFunction('s')
        temporal_filter = TemporalFilterCosineBump(wts, kpeaks, delays)
        temporal_filter = TemporalFilterCosineBump(t_weights, t_kpeaks, t_delays)

        spatial_filter_on = GaussianSpatialFilter(sigma=node['sigma_on'], origin=origin, translate=translate)
        on_linear_filter = SpatioTemporalFilter(spatial_filter_on, temporal_filter, amplitude=20)
60 changes: 58 additions & 2 deletions examples/filter_movie/README.md
@@ -1,2 +1,58 @@
This example shows how a user can provide any movie file saved as .npy to simulate LGN responses.
The input simply needs to be an (x, y, t) array that describes frames over time. See the config, which has an attribute under INPUTS called "data_file" that points to the desired .npy movie.
# FilterNet simulations from arbitrary movies

One optional type of visual stimulus for a FilterNet simulation is a movie generated by the modeler. The input file should be a .npy or .npz file containing a matrix of size `(frames, rows, columns)`. It should be grey-scaled, single channel (the LGN model was not optimized for color movies). The movie array can be either a floating-point matrix with contrast values normalized to [-1.0, +1.0], or an integer (uint8) matrix with values in [0, 255].
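For instance, a compliant input array can be built directly with NumPy (a minimal sketch; the file name is hypothetical and not part of this example):

```python
import numpy as np

# Movie of shape (frames, rows, columns): 1 second at 1000 fps on a 120x240 field.
frames, rows, cols = 1000, 120, 240

# Floating-point variant: contrast values normalized to [-1.0, +1.0].
movie = np.random.uniform(-1.0, 1.0, size=(frames, rows, cols)).astype(np.float32)
np.save('my_movie.npy', movie)  # hypothetical file name

# Integer variant: uint8 pixel values in [0, 255].
movie_u8 = np.round(127.5 * (movie + 1.0)).astype(np.uint8)
```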


In this network example we will show multiple ways of creating and playing custom movie files: two movies generated from the Allen Brain Observatory experiments, and one that is just a grey-screen movie.


## Generating movie files

Because the movie .npy files can get quite large they are not stored in the repository, so you must first generate them using the `create_movie.py` script.

### Natural Scenes

As part of the Brain Observatory experiments, mice were shown a sequence of 118 different static images in a randomized order. To recreate this experiment for FilterNet we will use the following command to generate a movie from 20 of those images.

```bash
$ python create_movie.py --n-images=20 --images-per-sec=10 --greyscreen-pre=500.0 natural-scenes
```

with the following options:
* **--n-images=20** - Use only 20 of the 118 images in the Brain Observatory data set.
* **--images-per-sec=10** - Show 10 images every second (i.e. the screen changes every 100 ms).
* **--greyscreen-pre=500.0** - Add a grey screen for 500 ms at the beginning of the movie.
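These options fully determine the movie length; as a quick sanity check (plain arithmetic, not part of `create_movie.py`), the command above should yield a 2500 ms movie:

```python
fps = 1000                 # create_movie.py renders movies at 1000 fps
greyscreen_pre_ms = 500.0  # --greyscreen-pre=500.0
n_images = 20              # --n-images=20
images_per_sec = 10        # --images-per-sec=10

ms_per_image = 1000.0 / images_per_sec                  # 100 ms per image
total_ms = greyscreen_pre_ms + n_images * ms_per_image  # grey screen + image sequence
total_frames = int(total_ms * fps / 1000.0)

print(total_ms, total_frames)  # 2500.0 2500
```

This matches the "2500ms" in the generated file name and the `tstop` of 2500.0 ms used in the corresponding config.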

The .npy file will be saved to the *movies/* folder. To run FilterNet and generate retinal-thalamic spike trains from this movie, use the `config.simulation_naturalscenes.json` config file:

```bash
$ python run_filternet.py config.simulation_naturalscenes.json
```

The resulting spike-trains and rates files will be saved to the *output_natural_scenes/* folder, as specified in the config.

### Natural Movies (e.g. Touch of Evil)

For more naturalistic stimuli, the Brain Observatory experiments included clips from the Orson Welles film "Touch of Evil": two 30-second clips and one 120-second clip. To use this input with FilterNet we must not only convert the movie files to .npy, but also resize them to fit our model's field size (120x240) and upscale the film to 1000 fps. We can do this with the following command:

```bash
$ python create_movie.py touch-of-evil
```
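The frame-rate upscaling the script performs can be approximated by nearest-frame repetition (an illustrative sketch, not the script's actual code; the 30 fps source rate is an assumption, and the placeholder clip is already at the 120x240 field size):

```python
import numpy as np

fps_in, fps_out = 30, 1000
clip = np.zeros((90, 120, 240), dtype=np.float32)  # placeholder: 3 s of source video

# For each 1 ms output frame, pick the nearest earlier source frame.
n_out = clip.shape[0] * fps_out // fps_in          # 3000 output frames
src_idx = (np.arange(n_out) * fps_in) // fps_out   # maps output frame -> source frame
upsampled = clip[src_idx]                          # shape (3000, 120, 240)
```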

The resulting .npy file will be saved to the *movies/* folder. To run FilterNet against 3 seconds of the 30-second clip, run:

```bash
$ python run_filternet.py config.simulation_touchofevil.json
```

The output will be saved to the *output_natural_movies_one/* folder, as specified in the config.

### Grey Screen

The `config.simulation_greyscreen.json` config will run FilterNet against a movie consisting of nothing but a static grey screen, which can be a useful sanity check of the model. To generate the appropriate movie .npy file, run the following command:

```bash
$ python create_movie.py greyscreen
```
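In normalized contrast units a grey screen is simply 0.0, so the generated movie is (in spirit) just a zero array; a sketch of the idea (the real file is written by `create_movie.py` into *movies/*):

```python
import numpy as np

# 2000 ms at 1000 fps on the 120x240 field; grey = 0.0 on the [-1, 1] contrast scale.
grey_movie = np.zeros((2000, 120, 240), dtype=np.float32)
np.save('grey_screen.2000ms.1000fps.normalized.npy', grey_movie)
```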


128 changes: 110 additions & 18 deletions examples/filter_movie/build_network.py
@@ -1,22 +1,114 @@
import os
import pickle
import numpy as np

from bmtk.builder import NetworkBuilder


field_size = (304, 608) # size of movie screen (pixels)
cell_grid = (5, 5) # place cells in a grid layout of NxM
xs, ys = np.meshgrid(np.linspace(0, field_size[0], num=cell_grid[0]), np.linspace(0, field_size[1], num=cell_grid[1]))

lgn_net = NetworkBuilder('lgn')
lgn_net.add_nodes(
    N=cell_grid[0]*cell_grid[1],
    ei='e',
    model_type='virtual',
    model_template='lgnmodel:LGNOnOFFCell',
    dynamics_params='lgn_on_off_model.json',
    sigma_on=(2.0, 2.0),
    sigma_off=(4.0, 4.0),
    x=xs.flatten(),
    y=ys.flatten()
)

lgn_net.save_nodes(output_dir='network')
X_grids = 2 # 15
Y_grids = 2 # 10
X_len = 240.0 # In linear degrees
Y_len = 120.0 # In linear degrees


def generate_positions_grids(N, X_grids, Y_grids, X_len, Y_len):
    width_per_tile = X_len/X_grids
    height_per_tile = Y_len/Y_grids

    X = np.zeros(N * X_grids * Y_grids)
    Y = np.zeros(N * X_grids * Y_grids)

    counter = 0
    for i in range(X_grids):
        for j in range(Y_grids):
            X_tile = np.random.uniform(i*width_per_tile, (i+1) * width_per_tile, N)
            Y_tile = np.random.uniform(j*height_per_tile, (j+1) * height_per_tile, N)
            X[counter*N:(counter+1)*N] = X_tile
            Y[counter*N:(counter+1)*N] = Y_tile
            counter += 1
    return np.column_stack((X, Y))


def get_filter_spatial_size(N, X_grids, Y_grids, size_range):
    spatial_sizes = np.zeros(N * X_grids * Y_grids)
    counter = 0
    for i in range(X_grids):
        for j in range(Y_grids):
            if len(size_range) == 1:
                sizes = np.ones(N) * size_range[0]
            else:
                sizes = np.random.triangular(size_range[0], size_range[0] + 1, size_range[1], N)
            spatial_sizes[counter * N:(counter + 1) * N] = sizes
            counter += 1

    return spatial_sizes


lgn_models = [
    {
        'N': 8,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:tOFF_TF15',
        'size_range': [2, 10],
        'dynamics_params': 'tOFF_TF15_3.44215357_-2.11509939_8.27421573_20.0_0.0_ic.json'
    },
    {
        'N': 8,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:sONsOFF_001',
        'size_range': [6],
        'dynamics_params': 'sOFF_TF4_3.5_-2.0_10.0_60.0_15.0_ic.json',
        'non_dom_params': 'sON_TF4_3.5_-2.0_30.0_60.0_25.0_ic.json',
        'sf_sep': 6.0
    },
    {
        'N': 5,
        'ei': 'e',
        'model_type': 'virtual',
        'model_template': 'lgnmodel:sONtOFF_001',
        'size_range': [9],
        'dynamics_params': 'tOFF_TF8_4.222_-2.404_8.545_23.019_0.0_ic.json',
        'non_dom_params': 'sON_TF4_3.5_-2.0_30.0_60.0_25.0_ic.json',
        'sf_sep': 4.0
    }
]

lgn = NetworkBuilder('lgn')
for params in lgn_models:
    # Get position of lgn cells and keep track of the averaged location
    # For now, use randomly generated values
    total_N = params['N'] * X_grids * Y_grids

    # Get positional coordinates of cells
    positions = generate_positions_grids(params['N'], X_grids, Y_grids, X_len, Y_len)

    # Get spatial filter size of cells
    filter_sizes = get_filter_spatial_size(params['N'], X_grids, Y_grids, params['size_range'])

    lgn.add_nodes(
        N=total_N,
        ei=params['ei'],
        model_type=params['model_type'],
        model_template=params['model_template'],
        x=positions[:, 0],
        y=positions[:, 1],
        dynamics_params=params['dynamics_params'],

        # TODO: Come up with a better name than non-dominant parameters (spatial-params?)
        non_dom_params=params.get('non_dom_params', None),

        # TODO: See if it's possible to calculate spatial sizes during simulation.
        spatial_size=filter_sizes,

        # NOTE: If tuning angle is not defined, then it will be randomly generated during the simulation. But
        # when evaluating a large network many times it will be more efficient to store it in the nodes file.
        tuning_angle=np.random.uniform(0.0, 360.0, total_N),

        # TODO: Can sf-separator be stored in the params json file?
        sf_sep=params.get('sf_sep', None)
    )

lgn.build()
lgn.save(output_dir='network')
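As a quick sanity check of the position helper above (a self-contained copy for illustration; not part of the repository script):

```python
import numpy as np

# Standalone copy of generate_positions_grids() from build_network.py.
def generate_positions_grids(N, X_grids, Y_grids, X_len, Y_len):
    width_per_tile = X_len / X_grids
    height_per_tile = Y_len / Y_grids
    X = np.zeros(N * X_grids * Y_grids)
    Y = np.zeros(N * X_grids * Y_grids)
    counter = 0
    for i in range(X_grids):
        for j in range(Y_grids):
            # Each tile gets N uniformly distributed cell positions.
            X[counter*N:(counter+1)*N] = np.random.uniform(i*width_per_tile, (i+1)*width_per_tile, N)
            Y[counter*N:(counter+1)*N] = np.random.uniform(j*height_per_tile, (j+1)*height_per_tile, N)
            counter += 1
    return np.column_stack((X, Y))

# 8 cells per tile on a 2x2 grid of tiles -> 32 (x, y) positions in a 240x120 degree field.
positions = generate_positions_grids(8, 2, 2, 240.0, 120.0)
print(positions.shape)  # (32, 2)
```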
@@ -22,11 +22,7 @@
"input_type": "movie",
"module": "movie",
"data_file": "$INPUT_DIR/test_scene.npy",
"frame_rate": 1000.0,
"evaluation_options": {
"t_min": 3.0,
"t_max": 4.0
}
"frame_rate": 1000.0
}
},

39 changes: 39 additions & 0 deletions examples/filter_movie/config.simulation_greyscreen.json
@@ -0,0 +1,39 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_greyscreen",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 2000.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/grey_screen.2000ms.1000fps.normalized.npy",
      "frame_rate": 1000.0
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}
40 changes: 40 additions & 0 deletions examples/filter_movie/config.simulation_naturalscenes.json
@@ -0,0 +1,40 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_natural_scenes",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 2500.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/ns_20images.set00.2500ms.1000fps.10ips.npy",
      "frame_rate": 1000.0,
      "normalize": true
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}
44 changes: 44 additions & 0 deletions examples/filter_movie/config.simulation_touchofevil.json
@@ -0,0 +1,44 @@
{
  "manifest": {
    "$BASE_DIR": ".",
    "$OUTPUT_DIR": "$BASE_DIR/output_natural_movies_one",
    "$INPUT_DIR": "$BASE_DIR/movies"
  },

  "run": {
    "tstop": 3000.0,
    "dt": 0.1
  },

  "target_simulator": "LGNModel",

  "conditions": {
    "jitter_lower": 1.0,
    "jitter_upper": 1.0
  },

  "inputs": {
    "movie_input": {
      "input_type": "movie",
      "module": "movie",
      "data_file": "$INPUT_DIR/natural_movie_one.29700ms.120x240.npy",
      "frame_rate": 1000.0,
      "normalize": true,
      "evaluation_options": {
        "t_min": 0.0,
        "t_max": 3.0
      }
    }
  },

  "output": {
    "output_dir": "$OUTPUT_DIR",
    "log_file": "log.txt",
    "rates_csv": "rates.csv",
    "spikes_csv": "spikes.csv",
    "spikes_h5": "spikes.h5",
    "overwrite_output_dir": true
  },

  "network": "config.circuit.json"
}
