diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..cda4151 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,2 @@ +simulation-code/build_scripts*/* linguist-vendored +notebooks/lehr_luboeinski_tetzlaff_2022/code/* linguist-vendored diff --git a/.gitignore b/.gitignore index 79e53e6..9513f4d 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,9 @@ *.out *pycache*/ +*_TRIPLET*/ +*_F*/ +*_MINCONV*/ +*_MIN2N1S*/ +plasticity_type.txt +*Const.*/ +simulation-bin/run_binary_misc/connections.txt diff --git a/BIBTEX.md b/BIBTEX.md index a83aa5e..2e3b32d 100644 --- a/BIBTEX.md +++ b/BIBTEX.md @@ -1,6 +1,6 @@ -## BibTeX code for publications +## BibTeX code for citing simulation code and related publications - @article{LuboeinskiTetzlaff2021a, + @article{LuboeinskiTetzlaff2021, title={Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks}, author={Luboeinski, Jannik and Tetzlaff, Christian}, journal={Communications Biology}, @@ -8,16 +8,32 @@ number={275}, year={2021}, publisher={Nature Publishing Group}, - doi={10.1038/s42003-021-01778-y} + doi={10.1038/s42003-021-01778-y}, + url={https://doi.org/10.1038/s42003-021-01778-y} } - @article{LuboeinskiTetzlaff2021b, + @article{LuboeinskiTetzlaff2022, title={Organization and priming of long-term memory representations with two-phase plasticity}, author={Luboeinski, Jannik and Tetzlaff, Christian}, - journal={bioRxiv preprint}, - year={2021}, - publisher={Cold Spring Harbor Laboratory}, - doi={10.1101/2021.04.15.439982} + journal={Cognitive Computation}, + volume={}, + number={}, + year={2022}, + publisher={Springer}, + doi={10.1007/s12559-022-10021-7}, + url={https://doi.org/10.1007/s12559-022-10021-7} + } + + @article{Lehr2022, + title={Neuromodulator-dependent synaptic tagging and capture retroactively controls neural coding in spiking neural networks}, + author={Lehr, Andrew B. 
and Luboeinski, Jannik and Tetzlaff, Christian}, + journal={Scientific Reports (under review)}, + volume={}, + number={}, + year={2022}, + publisher={}, + doi={}, + url={} } @phdthesis{Luboeinski2021thesis, @@ -26,5 +42,14 @@ year={2021}, school={University of G\"{o}ttingen}, type={Dissertation}, - url={http://hdl.handle.net/21.11130/00-1735-0000-0008-58f8-e} + doi={10.53846/goediss-463}, + url={https://doi.org/10.53846/goediss-463} + } + + @Misc{LuboeinskiLehr2022code, + title={Simulation code and analysis scripts for recurrent spiking neural networks with memory consolidation based on synaptic tagging and capture}, + author={Luboeinski, Jannik and Lehr, Andrew B.}, + year={2022}, + doi={10.5281/zenodo.4429195}, + url={https://doi.org/10.5281/zenodo.4429195} } diff --git a/README.md b/README.md index eac822f..980c476 100644 --- a/README.md +++ b/README.md @@ -8,22 +8,27 @@ The generic C++ code and build scripts for specific simulations are located in t The directory __analysis/__ contains Python scripts serving to analyze the data produced by the simulations. -The package that is provided here has been developed and used for the following studies: +The package that is provided here has been developed and used for the following publications: 1. Luboeinski, J., Tetzlaff, C. Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks. Commun. Biol. 4, 275 (2021). https://doi.org/10.1038/s42003-021-01778-y --- _original model underlying the simulation code, investigation of synaptic consolidation and improvement of a single memory representation_ -2. Luboeinski, J., Tetzlaff, C. Organization and priming of long-term memory representations with two-phase plasticity. bioRxiv (2021). https://doi.org/10.1101/2021.04.15.439982 --- +2. Luboeinski, J., Tetzlaff, C. Organization and priming of long-term memory representations with two-phase plasticity. Cogn. Comput. (2022). 
https://doi.org/10.1007/s12559-022-10021-7 --- _extension of the model, investigation of functional consequences of interactions between multiple memory representations in different paradigms_ -3. Luboeinski, J. The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks \[Dissertation\]. University of Göttingen (2021). https://doi.org/10.53846/goediss-463 --- +3. Lehr, A.B., Luboeinski, J., Tetzlaff, C. Neuromodulator-dependent synaptic tagging and capture retroactively controls neural coding in spiking neural networks. Sci. Rep., under review (2022). --- + _extension of the model, investigation of the effects of neuromodulator-dependent protein synthesis on memory consolidation_ + +4. Luboeinski, J. The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks \[Dissertation\]. University of Göttingen (2021). https://doi.org/10.53846/goediss-463 --- _extension of the model, further investigation of multiple memory representations with attractor dynamics as well as characterization of plasticity regimes depending on pre- and post-synaptic firing rate_ -If you use parts of the package for your research, please cite accordingly (BibTeX code can be found [here](BIBTEX.md)). Also note that the code provided here contains some features that have not been used in any publication yet. Please feel free to contact us for any questions. +Please cite accordingly if you use parts of the software package or the model for your research (BibTeX code is found [here](BIBTEX.md)). Note that the simulation code provided here contains some features that have not been used in any publication yet. Please feel free to contact us for any questions. ## Simulation code +The code for running the main simulation is located in the directory __simulation-code/__. It comes with build scripts for specific cases. 
+ ### Files * __NetworkMain.cpp__ - contains the main function initializing network simulations @@ -42,30 +47,37 @@ If you use parts of the package for your research, please cite accordingly (BibT The simulation code comes with shell scripts (in addition to the included Makefile) to build it for different purposes, related to the studies mentioned above. The subdirectories of __simulation-code/__ contain the following scripts: -* __build_scripts_paper1__: +* __build_scripts_paper1/__: * __compile_2N1S__ - compiles the code for simulating the effect of basic plasticity induction protocols at a single synapse * __compile_IRS__ - compiles the code for network simulations of learning, consolidation, recall, and the effect of intermediate stimulation on a memory representation * __compile_sizes__ - compiles the code for network simulations of learning, consolidation, and recall of a memory representation of certain size -* __build_scripts_paper2__: +* __build_scripts_paper2/__: * __compile_activation__ - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise * __compile_organization__ - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms + * __compile_organization_IC__ - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, with intermediate consolidation * __compile_organization_noLTD__ - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, without LTD * __compile_organization_randweight__ - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, with randomly initialized weights - * __compile_priming_and_activation__ - 
compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, priming one assembly, and investigating the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise * __compile_recall__ - compiles the code for network simulations to investigate in the absence of plasticity the recall of different assemblies + + +* __build_scripts_paper3/__: + + * __compile_nm_psth_c__ - compiles the code for network simulations of learning, neuromodulator-dependent consolidation, and recall of a memory representation of 150 neurons - -* __build_scripts_misc__: +* __build_scripts_misc/__: + * __compile__ - compiles the code as it is (without setting any specific preprocessor definition for a particular simulation) + * __compile_2N1S_conv__ - compiles the code for testing the convergence of the membrane potential * __compile_2N1S_Li2016__ - compiles the code for simulating the effect of basic plasticity induction protocols at a single synapse, with the same model as in Li, Kulvicius, Tetzlaff, PLOS ONE, 2016 * __compile_2N1S_minimal__ - compiles the code for a minimal example of the induction of early-phase plasticity by a few pre-defined spikes - * __compile_activation__ - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise - * __compile_activation_osc_1Hz__ - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of 1 Hz oscillatory input to the inhibitory population - * __compile_activation_osc_5Hz__ - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of 5 Hz oscillatory input to the inhibitory population + * __compile_activation_attractors__ - compiles the 
code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise or in the presence of 1 Hz/5 Hz oscillatory input to the inhibitory population + * __compile_CA150__ - compiles the code for network simulations of learning, consolidation, and recall of a memory representation of 150 neurons + * __compile_max_activity__ - compiles the code for a network that has one neuron spiking at maximal activity + * __compile_onespike__ - compiles the code for a network that is stimulated with a single pulse to evoke one spike in one neuron * __compile_organization_attractors__ - compiles the code for network simulations of learning and consolidating three attractor memory representations in different organizational paradigms * __compile_PFreq__ - compiles the code for network simulations to characterize plasticity regimes depending on pre- and post-synaptic firing rate @@ -75,40 +87,50 @@ runtime. The binaries and run scripts for the studies mentioned above are located in subdirectories of __simulation-bin/__. Please note that some of these scripts trigger a cascade of many simulations by using the `screen` command. This may cause less powerful machines to take very long or to run into memory issues. In those cases, you might consider running simulations separately. 
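Serializing the runs, as suggested above, can be sketched in a few lines of Python. This is a hypothetical illustration only: the `echo` commands are placeholders for the actual simulation binaries, whose names and invocation vary per study and are defined in the respective run scripts.

```python
import subprocess

def run_sequentially(commands):
    # Run each simulation to completion before starting the next one,
    # so that only a single simulation occupies memory at any time
    # (instead of spawning all of them at once in `screen` sessions).
    return [subprocess.run(cmd, shell=True).returncode for cmd in commands]

# placeholder commands standing in for the actual simulation binaries
return_codes = run_sequentially(["echo sim_1", "echo sim_2"])
```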
-* __run_binary_paper1__: +* __run_binary_paper1/__: * __run_2N1S__ - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity (see, for example, Sajikumar et al., J Neurosci, 2005) * __run_full__ - learn a memory representation, let it consolidate, and recall after 8 hours (no fast-forwarding, takes very long) * __run_IRS__ - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, apply intermediate stimulation, let the memory representation consolidate, and recall after 8 hours - * __run_sizes__ - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours - * __connections.txt__ - the default connectivity matrix used in this study; if this file is absent, the simulation program will automatically generate a new network structure + * __run_varied_inhibition__ - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours; do this for varied inhibition parameters + * __run_varied_size__ - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours; do this for varied cell assembly size + * __connections.txt__ - the default connectivity matrix used for this paper; if this file is absent, the simulation program will automatically generate a new network structure - -* __run_binary_paper2__: +* __run_binary_paper2/__: * __run_activation*__ - simulate the activity in a previously consolidated network for 3 minutes without plasticity (it is required to run the corresponding __run_learn_cons*__ script beforehand) * __run_learn_cons__ - subsequently learn 3 memory representations and let them consolidate for 8 hours - * 
__run_learn_cons_interleaved__ - learn 3 memory representations in an interleaved manner and let them consolidate for 8 hours - * __run_learn_cons_IC__ - subsequently learn 3 memory representations; consolidate for 8 hours after learning each assembly + * __run_learn_cons_interleaved__ - learn 3 memory representations in an interleaved manner and let them consolidate for 8 hours (it is required to run __run_learn_cons__ beforehand) + * __run_learn_cons_IC__ - subsequently learn 3 memory representations; intermediate consolidation for 8 hours after learning each individual assembly * __run_learn_cons_noLTD__ - subsequently learn 3 memory representations and let them consolidate for 8 hours; without LTD * __run_learn_cons_randweight__ - subsequently learn 3 memory representations and let them consolidate for 8 hours; with randomly initialized weights * __run_priming_and_activation__ - prime one of the assemblies in a previously consolidated network at a certain time and then simulate the activity for 3 minutes without plasticity (it is required to run __run_learn_cons_interleaved__ beforehand) * __run_recall__ - apply recall stimuli to the assemblies in a previously consolidated network and in a control network (it is required to run __run_learn_cons__ beforehand) -* __run_binary_misc__: +* __run_binary_paper3/__: + + * __run_raster_10s__ - learn a memory representation (with varied stimulation strength) and recall after 10 seconds + * __run_raster_8h__ - learn a memory representation (with varied stimulation strength), let it consolidate (with varied level of neuromodulation), and recall after 8 hours + * __run_nm_timing__ - learn a memory representation, let it consolidate (with low or high level of neuromodulation and varied timing), and recall after 8 hours + +* __run_binary_misc/__: + + * __run_2N1S_conv__ - test the convergence of the neuronal membrane potential following current stimulation * __run_2N1S_Li2016__ - reproduce single-synapse data resulting from 
basic induction protocols for synaptic plasticity, with the same model as in Li, Kulvicius, Tetzlaff, PLOS ONE, 2016 - * __run_2N1S_minimal__ - minimal example of the induction of early-phase plasticity by a few pre-defined spikes + * __run_2N1S_minimal__ - minimal example of the induction of early-phase plasticity by a few pre-defined spikes, with stochastic plasticity dynamics + * __run_2N1S_minimal_det__ - minimal example of the induction of early-phase plasticity by a few pre-defined spikes, with deterministic plasticity dynamics * __run_activation_attractors__ - simulate the activity in a previously consolidated network for 3 minutes without plasticity (it is required to run __run_learn_cons_attractors__ beforehand) * __run_learn_cons_attractors__ - subsequently learn 3 attractor memory representations; consolidate for 8 hours after learning each assembly * __run_PFreq__ - network simulations to characterize plasticity regimes depending on pre- and post-synaptic firing rate + * __connections.txt__ - the default connectivity matrix used for paper 1 ## Analysis scripts -The following scripts serve to process and analyze the data produced by the simulation code. They were tested to run with Python 3.7.3, NumPy 1.20.1, SciPy 1.6.0, and pandas 1.0.3. +The following scripts, located in __analysis/__, serve to process and analyze the data produced by the simulation code. They were tested to run with Python 3.7.3, NumPy 1.20.1, SciPy 1.6.0, and pandas 1.0.3. Please note that some of the script files are interdependent. -Also note that not all script files and functions have to be used to reproduce the results of any single study mentioned above. +Also note that not all script files and functions have to be used to reproduce the results of any single study mentioned above (also see section "Interactive scripts" below). 
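To make the two recall-quality measures computed by the scripts below concrete: the Q formula and its error propagation in this sketch follow the code added to __calculateQ.py__ in this diff, while the entropy estimators for the MI measure of __calculateMIa.py__ are simplified stand-ins (the histogram binning here is an assumption, not the repository's implementation).

```python
import numpy as np

def calculate_q(v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err):
    # Q compares the mean rate of assembly neurons that received no recall
    # stimulation (v_ans) to control neurons (v_ctrl), normalized by the
    # rate of the recall-stimulated assembly neurons (v_as)
    Q = (v_ans - v_ctrl) / v_as
    # Gaussian error propagation from the three rate uncertainties
    Q_err = np.sqrt((v_ans_err / v_as)**2 + (v_ctrl_err / v_as)**2
                    + (v_as_err * (v_ans - v_ctrl) / v_as**2)**2)
    return Q, Q_err

def mutual_information(v_ref, v_recall, bins=10):
    # MI(X;Y) = H(X) + H(Y) - H(X,Y), estimated from binned rate distributions
    def entropy(counts):
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))
    h_ref = entropy(np.histogram(v_ref, bins=bins)[0])
    h_rec = entropy(np.histogram(v_recall, bins=bins)[0])
    h_joint = entropy(np.histogram2d(v_ref, v_recall, bins=bins)[0].ravel())
    return h_ref + h_rec - h_joint

# made-up mean firing rates (in Hz) with uncertainties
Q, Q_err = calculate_q(100.0, 5.0, 60.0, 4.0, 10.0, 1.0)  # Q = 0.5
```

Note that for identical reference and recall distributions, `mutual_information` reduces to the self-information (marginal entropy) of the reference distribution, which the scripts use as a normalization reference.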
### Files @@ -118,16 +140,27 @@ Also note that not all script files and functions have to be used to reproduce t * __assemblyAvalancheStatistics.py__ - determines the statistics of avalanche occurrence within cell assemblies * __averageFileColumnsAdvanced.py__ - averages data columns across files (for example, average over multiple weight traces or probability distributions) * __averageWeights.py__ - averages across multiple weight matrices -* __calculateMIa.py__ - calculates the mutual information from two firing rate distributions -* __calculateQ.py__ - calculates the pattern completion coefficient Q for an input-defined cell assembly from a firing rate distribution +* __calculateMIa.py__ - calculates the mutual information from two firing rate distributions (given either by `[timestamp]_net_[time].txt` files or by arrays) +* __calculateQ.py__ - calculates the pattern completion coefficient Q for an input-defined cell assembly (either from the firing rate distribution given by a `[timestamp]_net_[time].txt` file or from mean firing rates) * __computeRateFromSpikes.py__ - computes the firing rate over time via fixed time windows from spike raster data -* __extractParamsFiringRates.py__ - recursively extracts the mean firing rates of neuronal subpopulations along with the simulation parameters from directories containing simulation data (used to process many datasets for raster plots) -* __extractParamsMeanWeights.py__ - recursively extracts the mean weights across neuronal subpopulations along with the simulation parameters from directories containing simulation data (used to process many datasets for raster plots) -* __extractParamsProteins.py__ - recursively extracts the mean protein amount across core and non-core neurons along with the simulation parameters from directories containing simulation data (used to process many datasets for raster plots) -* __extractParamsQMI.py__ - recursively extracts the Q and MI measures along with the simulation parameters from 
directories containing simulation data (used to process many datasets for raster plots) +* __extractParamsFiringRates.py__ - recursively extracts the mean firing rates of neuronal subpopulations along with the simulation parameters from directories containing spike raster data (can be used to process many datasets) +* __extractParamsMeanWeights.py__ - recursively extracts the mean weights across neuronal subpopulations along with the simulation parameters from directories containing `[timestamp]_net_[time].txt` files (can be used to process many datasets) +* __extractParamsProteins.py__ - recursively extracts the mean protein amount across core and non-core neurons along with the simulation parameters from directories containing `[timestamp]_mean_weight.txt` files (can be used to process many datasets) +* __extractParamsQMI.py__ - recursively extracts the Q and MI measures along with the simulation parameters from directories containing `[timestamp]_net_[time].txt` files (can be used to process many datasets) +* __extractParamsQMIfromSpikes.py__ - recursively extracts the Q and MI measures along with the simulation parameters from directories containing spike raster data (can be used to process many datasets) * __frequencyAnalysisSpikeRaster.py__ - computes the frequency spectrum of spike raster data * __meanCorrelations.py__ - computes firing rate correlations for neuron pairs from spike raster data and averages over subpopulations of the network +* __nmAnalysisClass.py__ - generates temporal traces and analyzes their consolidation; produces most of the final data for paper 3 (an interactive frontend for this is found in __notebooks/__) * __numberOfSpikesInBins.py__ - computes the distribution of spikes per time bin from cell assembly time series data * __overlapParadigms.py__ - defines paradigms of overlapping cell assemblies * __utilityFunctions.py__ - diverse utility functions, e.g., to read firing rate, early- and late-phase weight data from 
`[timestamp]_net_[time].txt` files produced by the simulation program * __valueDistributions.py__ - functions to analyze and plot weight and firing rate distributions + + +## Interactive scripts + +The following subfolders of __notebooks/__ contain interactive Jupyter notebooks serving to reproduce the figures of a related paper. They were tested to run with Jupyter Lab 3.2.8 and 3.4.3. Raw data has to be obtained from the provided sources or generated using the simulation code. + +### Files + +* __lehr_luboeinski_tetzlaff_2022/__ - reproduces figures of paper 3 diff --git a/analysis/assemblyAttractorStatistics.py b/analysis/assemblyAttractorStatistics.py index f41c464..91181a5 100644 --- a/analysis/assemblyAttractorStatistics.py +++ b/analysis/assemblyAttractorStatistics.py @@ -265,8 +265,8 @@ def spikeRasterPlot(timestamp, data_dir, output_dir, new_plots): fout = open("spike_raster.gpl", "w") fout.write("set term png enhanced font Sans 20 size 1280,960 lw 2.5\n") fout.write("set output '" + plot_file1 + "'\n\n") - fout.write("Ne = 2500\n") - fout.write("Ni = 625\n") + fout.write("Ne = " + str(exc_pop_size) + "\n") + fout.write("Ni = " + str(inh_pop_size) + "\n") fout.write("set xlabel 'Time (s)'\n") fout.write("unset ylabel\n") fout.write("set yrange [0:1]\n") diff --git a/analysis/assemblyAvalancheStatistics.py b/analysis/assemblyAvalancheStatistics.py index 896262b..da500f2 100644 --- a/analysis/assemblyAvalancheStatistics.py +++ b/analysis/assemblyAvalancheStatistics.py @@ -297,8 +297,8 @@ def spikeRasterPlot(timestamp, data_dir, output_dir, new_plots): fout = open("spike_raster.gpl", "w") fout.write("set term png enhanced font Sans 20 size 1280,960 lw 2.5\n") fout.write("set output '" + plot_file1 + "'\n\n") - fout.write("Ne = 2500\n") - fout.write("Ni = 625\n") + fout.write("Ne = " + str(exc_pop_size) + "\n") + fout.write("Ni = " + str(inh_pop_size) + "\n") fout.write("set xlabel 'Time (s)'\n") fout.write("unset ylabel\n") fout.write("set yrange [0:1]\n") diff 
--git a/analysis/calculateMIa.py b/analysis/calculateMIa.py index 45de371..bf564c3 100644 --- a/analysis/calculateMIa.py +++ b/analysis/calculateMIa.py @@ -15,17 +15,40 @@ np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays # calculateMIa -# Calculates mutual information of the activity distribution of the network at two timesteps and returns it along with the self-information of -# the reference distribution +# Calculates the mutual information and the self-information of the reference distribution from given firing rate distributions (reference/learning, and recall) +# v_ref: the reference activity distribution (or the activity distribution during learning) +# v_recall: the activity distribution during recall +# return: mutual information between reference/learning distribution and recall distribution, and self-information of reference distribution and recall distribution +def calculateMIa(v_ref, v_recall): + + margEntropyActL = marginalEntropy(v_ref) + print("margEntropyActL = " + str(margEntropyActL)) + + margEntropyActR = marginalEntropy(v_recall) + print("margEntropyActR = " + str(margEntropyActR)) + + jointEntropyAct = jointEntropy(v_ref, v_recall) + print("jointEntropyAct = " + str(jointEntropyAct)) + + MIa = mutualInformation(margEntropyActL, margEntropyActR, jointEntropyAct) + print("MIa = " + str(MIa)) + + return MIa, margEntropyActL, margEntropyActR + +# calculateMIaFromFile +# Calculates mutual information of the activity distribution of the network at two timesteps, and the self-information of +# the reference distribution; reads recall distribution from file; reads reference distribution either from file or uses a model # nppath: path to the network_plots directory to read the data from # timestamp: a string containing date and time (to access correct paths) # Nl_exc: the number of excitatory neurons in one line of a quadratic grid -# time_for_activity: the time that at which the activites shall be read 
out (some time during recall) -# time_ref: the reference time (for getting the activity distribution during learning) -# core: the neurons in the cell assembly (for stipulation; only required if no activity distribution during learning is available) -def calculateMIa(nppath, timestamp, Nl_exc, time_for_activity, time_ref = "11.0", core = np.array([])): - - if time_ref: # use reference firing rate distribution from data (for learned cell assembly) +# time_for_activity [optional]: the time at which the activities shall be read out (some time during recall) +# time_ref [optional]: the reference time (for getting the activity distribution during learning) +# core [optional]: the neurons in the cell assembly (for stipulation; only required if no activity distribution during learning is available) +# return: mutual information between reference/learning distribution and recall distribution, and self-information of reference distribution +def calculateMIaFromFile(nppath, timestamp, Nl_exc, time_for_activity = "20.1", time_ref = "11.0", core = np.array([])): + + ### Get data ### + if time_ref: # if provided, use reference firing rate distribution from file (for learned cell assembly) times_for_readout_list = [time_ref, time_for_activity] # the simulation times at which the activities shall be read out print("Using reference distribution at " + time_ref + "...") else: # use model firing rate distribution (for stipulated cell assembly) @@ -40,8 +63,6 @@ def calculateMIa(nppath, timestamp, Nl_exc, time_for_activity, time_ref = "11.0" z = [np.zeros((Nl_exc**2,Nl_exc**2)) for x in times_for_readout_list] v = [np.zeros((Nl_exc,Nl_exc)) for x in times_for_readout_list] - v_array = np.zeros(Nl_exc*Nl_exc) # data array - rawpaths = Path(nppath) for i in range(len(times_for_readout_list)): @@ -68,32 +89,14 @@ def calculateMIa(nppath, timestamp, Nl_exc, time_for_activity, time_ref = "11.0" except OSError: raise - ### Activity ### - if time_ref: # use reference firing rate 
distribution from data (for learned cell assembly) - margEntropyActL = marginalEntropy(v[0]) - print("margEntropyActL = " + str(margEntropyActL)) - - margEntropyActR = marginalEntropy(v[1]) - print("margEntropyActR = " + str(margEntropyActR)) - - jointEntropyAct = jointEntropy(v[0],v[1]) - print("jointEntropyAct = " + str(jointEntropyAct)) + ### Get results ### + if time_ref: # if provided, use reference firing rate distribution from file (for learned cell assembly) + MIa, self_MIa_ref, _ = calculateMIa(v[0], v[1]) else: # use model firing rate distribution (for stipulated cell assembly) - margEntropyActL = marginalEntropy(v_model) - print("margEntropyActL = " + str(margEntropyActL)) - - margEntropyActR = marginalEntropy(v[0]) - print("margEntropyActR = " + str(margEntropyActR)) - - jointEntropyAct = jointEntropy(v_model,v[0]) - print("jointEntropyAct = " + str(jointEntropyAct)) - - ### Results and Output ### - MIa = mutualInformation(margEntropyActL, margEntropyActR, jointEntropyAct) - print("MIa = " + str(MIa)) + MIa, self_MIa_ref, _ = calculateMIa(v_model, v[0]) - return MIa, margEntropyActL + return MIa, self_MIa_ref # marginalEntropy # Computes the marginal entropy of an array diff --git a/analysis/calculateQ.py b/analysis/calculateQ.py index fd06fed..53eab74 100644 --- a/analysis/calculateQ.py +++ b/analysis/calculateQ.py @@ -14,21 +14,52 @@ np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays # calculateQ -# Calculates Q measure for how well recall works +# From given mean firing rates of subpopulations, calculates the Q value +# that measures how well recall of an input-defined pattern works +# v_as: mean firing rate of the neuronal subpopulation that receives learning and recall stimulation +# v_as_err: uncertainty of v_as +# v_ans: mean firing rate of the neuronal subpopulation that receives learning but no recall stimulation +# v_ans_err: uncertainty of v_ans +# v_ctrl: mean firing rate of the neuronal subpopulation 
that receives neither learning nor recall stimulation +# v_ctrl_err: uncertainty of v_ctrl +def calculateQ(v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err): + + Q = (v_ans - v_ctrl) / v_as + + Q_err = np.sqrt( np.power( v_ans_err / v_as, 2 ) + np.power( v_ctrl_err / v_as, 2 ) \ + + np.power( v_as_err * (v_ans - v_ctrl) / np.power(v_as, 2), 2 ) ) # error propagated from v_as, v_ans, v_ctrl + + return Q, Q_err + +# getSubpopulations +# Retrieves the indices of the neurons belonging to the different network subpopulations +# N_pop: the number of neurons in the whole population +# core: array of the neurons belonging to the stimulated core +# p_r: the fraction of core neurons that are stimulated for recall +# return: arrays of neuron indices of network subpopulations that receive 1. learning and recall stimulation, 2. learning but no recall stimulation, 3. neither learning nor recall stimulation +def getSubpopulations(N_pop, core, p_r): + + core_recall = core[0:int(np.floor(float(p_r)*core.shape[0]))] + core_norecall = core[np.logical_not(np.in1d(core, core_recall))] + control = np.delete(np.arange(N_pop), core) + + return core_recall, core_norecall, control + +# calculateMeanRatesAndQ +# Calculates the mean firing rates of subpopulations from given firing rate distribution across a network, +# then calculates the Q value that measures how well recall of an input-defined pattern works # nppath: path to the network_plots directory to read the data from # timestamp: a string containing date and time (to access correct paths) # core: array of the neurons belonging to the stimulated core # Nl_exc: the number of excitatory neurons in one line of a quadratic grid -# time_for_activity: the time that at whcih the activites shall be read out (some time during recall) +# time_for_activity: the time at which the activities shall be read out (some time during recall) # recall_fraction: the fraction of core neurons that are activated for recall -def calculateQ(nppath, 
timestamp, core, Nl_exc, time_for_activity, recall_fraction): - - core_recall = core[0:int(np.floor(float(recall_fraction)*core.shape[0]))] - core_norecall = core[np.logical_not(np.in1d(core, core_recall))] - control = np.delete(np.arange(Nl_exc*Nl_exc), core) +def calculateMeanRatesAndQ(nppath, timestamp, core, Nl_exc, time_for_activity, recall_fraction): path = "" + core_recall, core_norecall, control = getSubpopulations(Nl_exc*Nl_exc, core, recall_fraction) + v_array = np.zeros(Nl_exc*Nl_exc) # data array # look for data file [timestamp]_net_[time_for_activity].txt @@ -79,9 +110,6 @@ def calculateQ(nppath, timestamp, core, Nl_exc, time_for_activity, recall_fracti v_ans_err = np.sqrt(np.sum(np.power(v_array[core_norecall], 2)) / len(core_norecall) - np.power(v_ans, 2)) v_ctrl_err = np.sqrt(np.sum(np.power(v_array[control], 2)) / len(control) - np.power(v_ctrl, 2)) - Q = (v_ans - v_ctrl) / v_as - - Q_err = np.sqrt( np.power( v_ans_err / v_as, 2 ) + np.power( v_ctrl_err / v_as, 2 ) \ - + np.power (v_as_err * (v_ans - v_ctrl) / np.power(v_as, 2), 2 ) ) # error propagated from v_as, v_ans, v_ctrl + Q, Q_err = calculateQ(v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err) return Q, Q_err, v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err diff --git a/analysis/computeRateFromSpikes.py b/analysis/computeRateFromSpikes.py index 66f9188..8bd7325 100644 --- a/analysis/computeRateFromSpikes.py +++ b/analysis/computeRateFromSpikes.py @@ -14,11 +14,16 @@ np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays # main parameters -Ne = 2500 # number of neurons in the excitatory population +'''Ne = 2500 # number of neurons in the excitatory population Ni = 625 # number of neurons in the inhibitory population Na = 600 # number of neurons in one cell assembly overlap = 0.1 # relative size of the overlap -period_duration = 0.01 # binning period (in units of seconds) +period_duration = 0.01 # binning period (in units of seconds)''' +Ne = 
1600 # number of neurons in the excitatory population +Ni = 400 # number of neurons in the inhibitory population +Na = 150 # number of neurons in one cell assembly +overlap = 0 # relative size of the overlap +period_duration = 0.1 # binning period (in units of seconds) # derived parameters Nl = int(round(np.sqrt(Ne))) # the number of excitatory neurons in one line of a quadratic grid @@ -31,7 +36,8 @@ # Recursively looks for a data files, extracts spiking data from them and computes mean firing rates # directory: the directory to look into # fout: file handle to output file -def extractRecursion(directory, fout): +# col_sep [optional]: characters separating columns in the data file +def extractRecursion(directory, fout, col_sep = '\t\t'): data_found = False # specifies if any data has been found rawpaths = Path(directory) @@ -46,8 +52,10 @@ def extractRecursion(directory, fout): full_path = str(x) (full_path_head, filename) = os.path.split(full_path) - if x.is_file() and "_spike_raster.txt" in filename: + if x.is_file() and ("_spikes.txt" in filename or "_spike_raster.txt" in filename): + if "_spikes.txt" in filename: # Arbor data + col_sep = ' ' data_found = True print("========================") @@ -57,11 +65,20 @@ def extractRecursion(directory, fout): # read the last line and compute number of periods try: with open(full_path, 'rb') as f: + first_line = f.readline().decode() f.seek(-2, os.SEEK_END) while f.read(1) != b'\n': # seek last line f.seek(-2, os.SEEK_CUR) last_line = f.readline().decode() - num_periods_tot = int(float(last_line.split('\t\t')[0]) / period_duration) + 1 + #num_periods_tot = int(float(last_line.split(col_sep)[0]) / period_duration) + 1 + t_first = float(first_line.split(col_sep)[0]) + t_shift = abs(t_first) if t_first < 0 else 0 # shift if there have been spikes at negative times + + if "_spikes.txt" in filename: # Arbor data + num_periods_tot = int(np.ceil((float(last_line.split(col_sep)[0]) + t_shift) / 1000 / period_duration)) + 1 + 
else: + num_periods_tot = int(np.ceil((float(last_line.split(col_sep)[0]) + t_shift) / period_duration)) + 1 + except IOError: print('Error opening "' + filename + '"') exit() @@ -86,10 +103,12 @@ def extractRecursion(directory, fout): f = open(full_path) for line in f: - segs = line.split('\t\t') + segs = line.split(col_sep) if (segs[0] != ""): t = float(segs[0]) + if "_spikes.txt" in filename: # Arbor data + t /= 1000 # convert ms to s n = int(segs[1]) current_period = int(np.floor(t / period_duration)) tot_spikes[current_period] += 1 @@ -133,10 +152,10 @@ def extractRecursion(directory, fout): mean_tot = tot_spikes[i] / N / period_duration # write time (at 1/2 of a period) and values for this period - fout.write(str(round((i+0.5)*period_duration,4)) + "\t\t" + \ - str(mean_A) + "\t\t" + str(mean_I_AB) + "\t\t" + str(mean_B) + "\t\t" + \ - str(mean_I_AC) + "\t\t" + str(mean_I_BC) + "\t\t" + str(mean_I_ABC) + "\t\t" + str(mean_C) + "\t\t" + \ - str(mean_ctrl) + "\t\t" + str(mean_inh) + "\t\t" + str(mean_tot) + "\n") + fout.write(str(round((i+0.5)*period_duration,4)) + col_sep + \ + str(mean_A) + col_sep + str(mean_I_AB) + col_sep + str(mean_B) + col_sep + \ + str(mean_I_AC) + col_sep + str(mean_I_BC) + col_sep + str(mean_I_ABC) + col_sep + str(mean_C) + col_sep + \ + str(mean_ctrl) + col_sep + str(mean_inh) + col_sep + str(mean_tot) + "\n") elif x.is_dir(): diff --git a/analysis/extractParamsFiringRates.py b/analysis/extractParamsFiringRates.py index e578690..af77156 100644 --- a/analysis/extractParamsFiringRates.py +++ b/analysis/extractParamsFiringRates.py @@ -13,19 +13,19 @@ from shutil import copyfile import os import traceback -from utilityFunctions import readParams +from utilityFunctions import readParams, hasTimestamp np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays -Nl = 50 # the number of excitatory neurons in one line of a quadratic grid -Ne = 2500 -Ni = 625 -N = 3125 # the number of neurons in the whole 
network +Ne = 1600 # the number of excitatory neurons +Ni = 400 # the number of inhibitory neurons +N = Ne + Ni # the number of neurons in the whole network # extractRecursion # Recursively looks for data directories and extracts parameters and the Q measure from them # directory: the directory to look in # fout: file handle to output file -def extractRecursion(directory, fout): +# col_sep [optional]: characters separating columns in the data file +def extractRecursion(directory, fout, col_sep = '\t\t'): data_found = False # specifies if any data has been found rawpaths = Path(directory) @@ -44,7 +44,7 @@ def extractRecursion(directory, fout): full_path = str(x) path_tail = os.path.split(full_path)[1] # extract the folder name - if hasTimestamp(path): + if hasTimestamp(path_tail): data_found = True @@ -58,13 +58,15 @@ def extractRecursion(directory, fout): print("------------------------") params = readParams(full_path + os.sep + timestamp + "_PARAMS.txt") - core = np.arange(params[12]) # define the cell assembly + # define the cell assembly + Na = params[12] + core = np.arange(Na) # write the parameter values to the output file - fout.write(timestamp + "\t\t") + fout.write(timestamp + col_sep) for i in range(len(params)): - fout.write(str(params[i]) + "\t\t") + fout.write(str(params[i]) + col_sep) # read out *_spike_raster.txt: frpath = full_path + os.sep + timestamp + "_spike_raster.txt" @@ -83,7 +85,7 @@ def extractRecursion(directory, fout): tot_spikes = np.zeros(len(tws), dtype=int) # number of spikes in the whole network for line in f: - segs = line.split('\t\t') + segs = line.split(col_sep) if (segs[0] != ""): t = float(segs[0]) @@ -108,14 +110,14 @@ def extractRecursion(directory, fout): f.close() for j in range(len(tws)): - #fout.write(str(CA_spikes[j] / Na / 0.5) + "\t\t" + str(c_spikes[j] / (Ne-Na) / 0.5) + "\t\t" + str(inh_spikes[j] / Ni / 0.5) + \ - # "\t\t" + str(tot_spikes[j] / N / 0.5) + "\n") - fout.write(str(CA_spikes[j] / Na / 0.5) + "\t\t" + 
str(c_spikes[j] / (Ne-Na) / 0.5) + "\t\t")
+			#fout.write(str(CA_spikes[j] / Na / 0.5) + col_sep + str(c_spikes[j] / (Ne-Na) / 0.5) + col_sep + str(inh_spikes[j] / Ni / 0.5) + \
+			#           col_sep + str(tot_spikes[j] / N / 0.5) + "\n")
+			fout.write(str(CA_spikes[j] / Na / 0.5) + col_sep + str(c_spikes[j] / (Ne-Na) / 0.5) + col_sep)
 
 			fout.write("\n")
 
 		else:
-			ret = extractRecursion(directory + os.sep + path, fout)
+			ret = extractRecursion(directory + os.sep + path_tail, fout)
 			data_found = data_found or ret
 
 	return data_found
diff --git a/analysis/extractParamsQMI.py b/analysis/extractParamsQMI.py
index e1b8c38..2972220 100644
--- a/analysis/extractParamsQMI.py
+++ b/analysis/extractParamsQMI.py
@@ -22,14 +22,16 @@
 ref_time = "11.0" # readout time for the reference firing rate or weight distribution (typically during learning)
 
 # extractRecursion
-# Recursively looks for data directories and extracts parameters and the Q and MI measures from them
-# For 8h-recall simulation data that itself does not contain the data from during learing, the related data
-# from during learning has to be available, with the same description and an earlier timestamp!
+# Recursively looks for data directories and extracts parameters and the Q and MI measures from them.
+# For data of 8h-recall simulations that does not itself contain the learning process, the related data
+# from during learning has to be available under the timestamp immediately preceding the timestamp
+# of the 8h-recall simulation!
# directory: the directory to look in # fout: file handle to output file def extractRecursion(directory, fout): data_found = False # specifies if any data has been found + prev_path_tail = None rawpaths = Path(directory) MI = [] @@ -51,18 +53,17 @@ def extractRecursion(directory, fout): data_found = True print("========================") - if "_TRIPLET" in path_tail: # presumed simulation that contains the learning process and 10s-recall - [timestamp, prev_desc] = path_tail.split("_TRIPLET", 1) + if "_TRIPLET" in path_tail: # presume simulation that contains the learning process and 10s-recall + timestamp = path_tail.split("_TRIPLET", 1)[0] prev_full_path = full_path # to copy file containing the reference data to another data folder - prev_path = path_tail - prev_timestamp = timestamp - prev_desc = prev_desc.split(" ", 1)[1] # extract description, to later link with a data folder for 8h-recall - else: # presumed simulation that contains 8h-recall - [timestamp, desc] = path_tail.split(" ", 1) # extract timestamp and description, to link with a previous data folder for learning - if desc == prev_desc: # temporarily copy reference data file if the state of this simulation has been loaded from previous one + prev_path_tail = path_tail + else: # presume simulation that contains 8h-recall, following a previous simulation containing the learning process + timestamp = path_tail.split(" ", 1)[0] + if prev_path_tail is not None: # temporarily copy reference data file if the state of this simulation has been loaded from previous one + src_file = prev_full_path + os.sep + "network_plots" + os.sep + prev_path_tail.split("_TRIPLET", 1)[0] + "_net_" + ref_time + ".txt" dest_file = full_path + os.sep + "network_plots" + os.sep + timestamp + "_net_" + ref_time + ".txt" - copyfile(prev_full_path + os.sep + "network_plots" + os.sep + prev_timestamp + "_net_" + ref_time + ".txt", \ - dest_file) + copyfile(src_file, dest_file) + print("Copying learning data from '" + prev_path_tail + 
"'...") else: print("No previous data folder found, errors may occur...") @@ -88,7 +89,7 @@ def extractRecursion(directory, fout): # compute the Q value try: - Q, Q_err, v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err = calculateQ(full_path + os.sep + "network_plots" + os.sep, timestamp, core, Nl_exc, readout_time, params[16]) + Q, Q_err, v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err = calculateMeanRatesAndQ(full_path + os.sep + "network_plots" + os.sep, timestamp, core, Nl_exc, readout_time, params[16]) except OSError as e: print(traceback.format_exc()) except ValueError as e: @@ -102,9 +103,9 @@ def extractRecursion(directory, fout): # compute the MI and selfMI value try: if "STIP" in params[8]: # stipulated CA - MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, "", core) + MI, selfMI = calculateMIaFromFile(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, "", core) else: # learned CA - MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, ref_time) + MI, selfMI = calculateMIaFromFile(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, ref_time) except OSError as e: print(traceback.format_exc()) except ValueError as e: diff --git a/analysis/extractParamsQMIfromSpikes.py b/analysis/extractParamsQMIfromSpikes.py new file mode 100644 index 0000000..f19a3b0 --- /dev/null +++ b/analysis/extractParamsQMIfromSpikes.py @@ -0,0 +1,243 @@ +###################################################################################### +### Recursively moves through directories to extract parameters, firing rates, and ### +### the recall quality measures Q and MI from spike raster data ### +###################################################################################### + +### Copyright 2018-2022 Jannik Luboeinski +### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) +### Contact: 
jannik.lubo[at]gmx.de + +import numpy as np +import os +import traceback +from utilityFunctions import hasTimestamp, readParams +from calculateQ import getSubpopulations, calculateQ +from calculateMIa import calculateMIa +from pathlib import Path +from shutil import copyfile +import json + +np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays +Ne = 1600 # the number of excitatory neurons +Ni = 400 # the number of inhibitory neurons +N = Ne + Ni # the number of neurons in the whole network +time_window_size = 0.5 # duration of the time window for firing rate computation (in seconds) + +# extractRecursion +# Recursively looks for directories with spike raster data and extracts parameters, firing rates, and the recall performance measures Q and MI +# sup_directory: the directory to look in +# fout: file handle to output file +# col_sep [optional]: characters separating columns in the data file +def extractRecursion(sup_directory, fout, col_sep = '\t\t'): + + data_found = False # specifies if any data has been found + rawpaths = Path(sup_directory) + MI = [] + + print("Contents of directory " + sup_directory) + print([str(x) for x in rawpaths.iterdir()]) + rawpaths = Path(sup_directory) + + for x in sorted(rawpaths.iterdir()): # loop through elements in this directory + + dest_file = "" + + full_path = str(x) + path_tail = os.path.split(full_path)[1] # extract the filename + + if hasTimestamp(path_tail) \ + and ("_spikes.txt" in path_tail or "_spike_raster.txt" in path_tail): + + data_found = True + if "_spikes.txt" in path_tail: + arbor = True + col_sep = ' ' + else: + arbor = False + col_sep = '\t\t' + + [timestamp, _] = path_tail.split("_spike", 1) + + print("========================") + print(timestamp + " in " + sup_directory) + print("------------------------") + + # setting the parameters + if arbor: + # load parameter configuration from JSON file + config = json.load(open(os.path.join(sup_directory, timestamp + 
"_config.json"), "r")) + Na = config['populations']['N_CA'] + p_r = config['populations']['p_r'] + + # define the readout time for learning + readout_time_0 = float(config['simulation']['learn_protocol']['time']) + 1.0 + print("readout_time_0 =", readout_time_0) + + # define the readout time for recall + readout_time_1 = float(config['simulation']['recall_protocol']['time']) + 0.1 + print("readout_time_1 =", readout_time_1) + + # get values of important parameters + w_ei = config['populations']['w_ei'] + w_ie = config['populations']['w_ie'] + w_ii = config['populations']['w_ii'] + + else: + # load parameter configuration from *_PARAMS.txt file + params = readParams(os.path.join(sup_directory, timestamp + "_PARAMS.txt")) + Na = int(params[12]) + p_r = float(params[16]) + + # define the readout time for learning/reference + if len(params[8].split("at")) > 2: + readout_time_0 = float(params[8].split("at")[1]) + 1.0 # 1.0 sec. after onset of learning stimulus + else: + readout_time_0 = 11.0 # default value if not specified + print("readout_time_0 =", readout_time_0) + + # define the readout time for recall + readout_time_1 = float(params[9].split("at")[1]) + 0.1 # 0.1 sec. 
after onset of recall stimulus
+				print("readout_time_1 =", readout_time_1)
+
+				# get values of important parameters
+				w_ei = params[0]
+				w_ie = params[1]
+				w_ii = params[2]
+
+			# define the cell assembly (the subpopulation that receives learning stimulation)
+			core = np.arange(Na)
+
+			# define the subpopulations of the network with respect to recall
+			core_as, core_ans, control = getSubpopulations(Ne, core, p_r)
+			Nas = len(core_as)
+			Nans = len(core_ans)
+
+			# write the values of important parameters to the output file
+			fout.write(timestamp + col_sep + \
+			           str(Na) + col_sep + \
+			           str(w_ei) + col_sep + \
+			           str(w_ie) + col_sep + \
+			           str(w_ii) + col_sep + \
+			           str(readout_time_1) + col_sep)
+
+			# read out *_spikes.txt / *_spike_raster.txt:
+			try:
+				f = open(full_path)
+
+			except IOError:
+				print('Error opening "' + full_path + '"')
+				exit()
+
+			# read all spikes from file and count the spikes for the different subpopulations and for the individual excitatory neurons
+			time_windows = [ [readout_time_0-time_window_size/2, readout_time_0+time_window_size/2], \
+			                 [readout_time_1-time_window_size/2, readout_time_1+time_window_size/2] ]
+			as_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in assembly core neurons that receive recall stimulation
+			ans_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in assembly core neurons that do not receive recall stimulation
+			ctrl_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in the exc. control population
+			exc_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in the exc. population
+			inh_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in the inh. 
population + tot_spikes = np.zeros(len(time_windows), dtype=int) # number of spikes in the whole network + exc_spikes_individual = np.zeros((len(time_windows), Ne), dtype=int) # number of spikes for each neuron + + for line in f: + segs = line.split(col_sep) + + if (segs[0] != ""): + if arbor: + t = float(segs[0].lstrip("\x00")) / 1000 # convert ms to s + else: + t = float(segs[0].lstrip("\x00")) + + for j in range(len(time_windows)): + if t >= time_windows[j][0] and t < time_windows[j][1]: # time window for firing rate + n = int(segs[1]) + + tot_spikes[j] += 1 + + if n < Ne: # excitatory population + exc_spikes[j] += 1 + exc_spikes_individual[j][n] += 1 + if n < Nas: # assembly core, stimulated for recall + as_spikes[j] += 1 + elif n < Na: # assembly core, not stimulated for recall + ans_spikes[j] += 1 + elif n < Ne: # control + ctrl_spikes[j] += 1 + elif n < N: # inhibitory population + inh_spikes[j] += 1 + else: + print('Error reading from "' + full_path + '"') + + break # exit the for-loop + + f.close() + + # determine activities for the individual excitatory neurons for each readout time (the activity distributions) + v_exc_individual = exc_spikes_individual / time_window_size + + # determine mean activities (and standard deviations) for the subpopulations at recall (i.e., at 'readout_time_1') + v_exc = exc_spikes[1] / Ne / time_window_size # mean firing rate of exc. population + v_as = as_spikes[1] / Nas / time_window_size # mean firing rate of exc. subpopulation that receives learning and recall stimulation + v_ans = ans_spikes[1] / Nans / time_window_size # mean firing rate of exc. subpopulation that receives learning but no recall stimulation + v_ctrl = ctrl_spikes[1] / (Ne-Na) / time_window_size # mean firing rate of exc. subpopulation that receives neither learning nor recall stimulation + v_inh = inh_spikes[1] / Ni / time_window_size # mean firing rate of inh. 
population
+
+			v_exc_err = np.sqrt(np.sum(np.power(v_exc_individual[1], 2)) / Ne - np.power(v_exc, 2))
+			v_as_err = np.sqrt(np.sum(np.power(v_exc_individual[1][core_as], 2)) / len(core_as) - np.power(v_as, 2))
+			v_ans_err = np.sqrt(np.sum(np.power(v_exc_individual[1][core_ans], 2)) / len(core_ans) - np.power(v_ans, 2))
+			v_ctrl_err = np.sqrt(np.sum(np.power(v_exc_individual[1][control], 2)) / len(control) - np.power(v_ctrl, 2))
+			v_inh_err = np.nan # no per-neuron distribution available to compute this value (TODO?)
+
+			# compute the Q value from mean activities at recall
+			try:
+				Q, Q_err = calculateQ(v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err)
+			except OSError as e:
+				print(traceback.format_exc())
+			except ValueError as e:
+				print(traceback.format_exc())
+			else:
+				print("v_exc = " + str(v_exc) + " +- " + str(v_exc_err))
+				print("v_as = " + str(v_as) + " +- " + str(v_as_err))
+				print("v_ans = " + str(v_ans) + " +- " + str(v_ans_err))
+				print("v_ctrl = " + str(v_ctrl) + " +- " + str(v_ctrl_err))
+				print("v_inh = " + str(v_inh) + " +- " + str(v_inh_err))
+				print("Q = " + str(Q) + " +- " + str(Q_err))
+
+			# compute the MI value (and entropies) from the activity distributions at learning and recall
+			try:
+				MI, selfMIL, selfMIR = calculateMIa(v_exc_individual[0], v_exc_individual[1])
+			except OSError as e:
+				print(traceback.format_exc())
+			except ValueError as e:
+				print(traceback.format_exc())
+			else:
+				print("Norm. 
MIa = " + str(MI / selfMIL)) # MI normalized by the self-information of the reference distribution''' + + # write Q and MI value to the output file + fout.write(str(v_exc) + col_sep + str(v_exc_err) + col_sep + \ + str(v_as) + col_sep + str(v_as_err) + col_sep + str(v_ans) + col_sep + str(v_ans_err) + col_sep + str(v_ctrl) + col_sep + str(v_ctrl_err) + col_sep + \ + str(v_inh) + col_sep + str(v_inh_err) + col_sep) + fout.write(str(Q) + col_sep + str(Q_err) + col_sep) + fout.write(str(MI) + col_sep + str(selfMIL) + col_sep + str(selfMIR)) + + fout.write("\n") + + elif x.is_dir(): # recurse into the next directory + ret = extractRecursion(sup_directory + os.sep + path_tail, fout) + data_found = data_found or ret + + return data_found + +# main + +try: + fout = open("Params_Q_MI.txt", "a") + +except IOError: + print('Error opening "Params_Q_MI.txt"') + exit() + +if extractRecursion('.', fout): + print("========================") + +fout.close() diff --git a/analysis/utilityFunctions.py b/analysis/utilityFunctions.py index 57b1fe4..b7d5ba8 100644 --- a/analysis/utilityFunctions.py +++ b/analysis/utilityFunctions.py @@ -4,7 +4,6 @@ ### Copyright 2017-2022 Jannik Luboeinski, Andrew Lehr ### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) -### Contact: jannik.lubo[at]gmx.de import numpy as np import os @@ -192,9 +191,15 @@ def readParams(path): # filename: a string # return: true if presumably there is a timestamp, false if not def hasTimestamp(filename): + try: - if filename[2] == "-" and filename[5] == "-" and filename[8] == "_" and \ - filename[11] == "-" and filename[14] == "-": + offset = 0 # number of characters after which the timestamp begins + for prefix in ["data_", "tmp_data_"]: + if filename[0:len(prefix)] == prefix: + offset = len(prefix) + + if filename[offset+2] == "-" and filename[offset+5] == "-" and filename[offset+8] == "_" and \ + filename[offset+11] == "-" and filename[offset+14] == "-": return True except: pass @@ -259,4 +264,4 
@@ def __init__(self): self.path = defaultdict(dict) self.n_files = None self.which = None - self.time_required = None \ No newline at end of file + self.time_required = None diff --git a/simulation-bin/run_binary_misc/run_2N1S_conv b/simulation-bin/run_binary_misc/run_2N1S_conv new file mode 100644 index 0000000..be2f189 --- /dev/null +++ b/simulation-bin/run_binary_misc/run_2N1S_conv @@ -0,0 +1,9 @@ +#!/bin/sh + +# Neuron that receives a constant current such that its membrane converges to a depolarized state +./2N1S_conv.out -t_max=0.2 -learn= -I_0=0.15 -Ca_pre=1.0 -Ca_post=0.2758 -purpose="Const. current convergence" + +# Neuron that is first stimulated with a constant current such that its membrane converges to a depolarized state, and then +# relaxes back to the reversal potential +./2N1S_conv.out -t_max=0.2 -learn="MINCONV" -N_stim=1 -I_0=0.0 -Ca_pre=1.0 -Ca_post=0.2758 -purpose="Const. current convergence" + diff --git a/simulation-bin/run_binary_misc/run_2N1S_minimal_det b/simulation-bin/run_binary_misc/run_2N1S_minimal_det new file mode 100644 index 0000000..888d33c --- /dev/null +++ b/simulation-bin/run_binary_misc/run_2N1S_minimal_det @@ -0,0 +1,9 @@ +#!/bin/sh + +mkdir 2N1S_minimal_det +cd 2N1S_minimal_det +cp ../2N1S_minimal.out . + +./2N1S_minimal.out -learn=MIN2N1S -Ca_pre=1.0 -Ca_post=0.2758 -purpose= -t_max=0.2 -sigma_plasticity=0 + +cd .. 
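The `hasTimestamp()` change in `analysis/utilityFunctions.py` above now skips an optional `data_` or `tmp_data_` prefix before checking the fixed dash/underscore positions of a `YY-MM-DD_HH-MM-SS` timestamp. The same check can be sketched with a regular expression (a slightly stricter variant, since it also requires digits at the timestamp positions; the snake_case name is illustrative):

```python
import re

# Sketch of the prefix-aware timestamp check from analysis/utilityFunctions.py:
# accept an optional "data_"/"tmp_data_" prefix, followed by a timestamp of the
# form "YY-MM-DD_HH-MM-SS" (the positional check in the patch only tests the
# separator characters; this regex additionally requires digits).
TIMESTAMP_RE = re.compile(r"^(?:tmp_data_|data_)?\d{2}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}")

def has_timestamp(filename):
    return TIMESTAMP_RE.match(filename) is not None
```

For example, `has_timestamp("data_22-01-31_12-30-45_spikes.txt")` is true, while a name such as `connections.txt` is rejected.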
diff --git a/simulation-bin/run_binary_paper1/run_varied_inhibition b/simulation-bin/run_binary_paper1/run_varied_inhibition new file mode 100644 index 0000000..829303e --- /dev/null +++ b/simulation-bin/run_binary_paper1/run_varied_inhibition @@ -0,0 +1,3 @@ +#!/bin/sh + + diff --git a/simulation-bin/run_binary_paper1/run_sizes b/simulation-bin/run_binary_paper1/run_varied_size similarity index 100% rename from simulation-bin/run_binary_paper1/run_sizes rename to simulation-bin/run_binary_paper1/run_varied_size diff --git a/simulation-bin/run_binary_paper2/recall/runner b/simulation-bin/run_binary_paper2/recall/runner deleted file mode 100644 index 9d551d2..0000000 --- a/simulation-bin/run_binary_paper2/recall/runner +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/sh -organization_dir="../organization/" - -### NOOVERLAP ### -cd NOOVERLAP -cp "../netA.out" . -cp "../netB.out" . -cp "../netC.out" . -cp "../../../../analysis/overlapParadigms.py" . -cp "../../../../analysis/utilityFunctions.py" . -cp "../../../../analysis/assemblyAvalancheStatistics.py" . -cp "../$organization_dir/3rd/NOOVERLAP/connections.txt" . -cp "../$organization_dir/3rd/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt -screen -d -m /bin/sh run - -### Control ### -cd ../Control -cp "../netA.out" . -cp "../netB.out" . -cp "../netC.out" . -cp "../../../../analysis/overlapParadigms.py" . -cp "../../../../analysis/utilityFunctions.py" . -cp "../../../../analysis/assemblyAvalancheStatistics.py" . -cp "../$organization_dir/1st/FIRST/connections.txt" . -cp "../$organization_dir/1st/FIRST/"*/*/*"_net_0.0.txt" coupling_strengths.txt -screen -d -m /bin/sh run - -cd .. 
diff --git a/simulation-bin/run_binary_paper2/run_recall b/simulation-bin/run_binary_paper2/run_recall
index e1fbee6..968c631 100644
--- a/simulation-bin/run_binary_paper2/run_recall
+++ b/simulation-bin/run_binary_paper2/run_recall
@@ -1,7 +1,32 @@
 #!/bin/sh
-# copies data from learning/consolidation simulations in 'organization/' and applies recall stimulation to a fraction of the neurons of each assembly
+# first copies data from learning/consolidation simulations in 'organization/' and then simulates recall through stimulation of a fraction of the neurons of each assembly
 cd recall/
+organization_dir="../organization/"
+
+### NOOVERLAP ###
+cd NOOVERLAP
+cp "../netA.out" .
+cp "../netB.out" .
+cp "../netC.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$organization_dir/3rd/NOOVERLAP/connections.txt" .
+cp "../$organization_dir/3rd/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+### Control ###
+cd ../Control
+cp "../netA.out" .
+cp "../netB.out" .
+cp "../netC.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$organization_dir/1st/FIRST/connections.txt" .
+cp "../$organization_dir/1st/FIRST/"*/*/*"_net_0.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+cd ../.. 
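The rate extraction in `analysis/computeRateFromSpikes.py` above bins the spike times from such simulation runs into periods of `period_duration` seconds, converting Arbor timestamps from ms to s first. A stripped-down sketch of that binning (column separator and unit handling as in the patch; the function name and defaults are illustrative):

```python
import numpy as np

def bin_spikes(lines, period_duration=0.1, col_sep='\t\t', arbor=False):
    # Count spikes per time bin of width period_duration (in seconds);
    # each line holds "<time><col_sep><neuron>", and Arbor files ("*_spikes.txt")
    # store the spike times in ms rather than s.
    times = []
    for line in lines:
        segs = line.split(col_sep)
        if segs[0] != "":
            t = float(segs[0])
            if arbor:
                t /= 1000  # convert ms to s
            times.append(t)
    num_periods = int(np.floor(max(times) / period_duration)) + 1
    counts = np.zeros(num_periods, dtype=int)
    for t in times:
        counts[int(np.floor(t / period_duration))] += 1
    return counts
```

Dividing each bin count by the subpopulation size and by `period_duration` then yields the mean firing rates written out by the script.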
diff --git a/simulation-bin/run_binary_paper3/run_nm_timing b/simulation-bin/run_binary_paper3/run_nm_timing new file mode 100644 index 0000000..fd31cee --- /dev/null +++ b/simulation-bin/run_binary_paper3/run_nm_timing @@ -0,0 +1,38 @@ +#!/bin/sh +rm -f saved_state.txt + +run_simulations_nm() +{ + # duration of neuromodulation: 30 min + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10.0 -nm_end=1810.0 -purpose="$2 NM beginning at 0min, recall after 8h" + #./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=910.0 -nm_end=2710.0 -purpose="$2 NM beginning at 15min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=1810.0 -nm_end=3610.0 -purpose="$2 NM beginning at 30min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=3610.0 -nm_end=5410.0 -purpose="$2 NM beginning at 60min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=5410.0 -nm_end=7210.0 -purpose="$2 NM beginning at 90min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=7210.0 -nm_end=9010.0 -purpose="$2 NM beginning 
at 120min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=9010.0 -nm_end=10810.0 -purpose="$2 NM beginning at 150min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10810.0 -nm_end=12610.0 -purpose="$2 NM beginning at 180min, recall after 8h" + + # duration of neuromodulation: 60 min + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10.0 -nm_end=3610.0 -purpose="$2 NM beginning at 0min, recall after 8h" + #./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=910.0 -nm_end=4510.0 -purpose="$2 NM beginning at 15min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=1810.0 -nm_end=5410.0 -purpose="$2 NM beginning at 30min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=3610.0 -nm_end=7210.0 -purpose="$2 NM beginning at 60min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=5410.0 -nm_end=9010.0 
-purpose="$2 NM beginning at 90min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=7210.0 -nm_end=10810.0 -purpose="$2 NM beginning at 120min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=9010.0 -nm_end=12610.0 -purpose="$2 NM beginning at 150min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10810.0 -nm_end=14410.0 -purpose="$2 NM beginning at 180min, recall after 8h" + + # duration of neuromodulation: until end of simulation + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10.0 -nm_end=-1 -purpose="$2 NM beginning at 0min, recall after 8h" + #./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=910.0 -nm_end=-1 -purpose="$2 NM beginning at 15min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=1810.0 -nm_end=-1 -purpose="$2 NM beginning at 30min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 
-nm_begin=3610.0 -nm_end=-1 -purpose="$2 NM beginning at 60min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=5410.0 -nm_end=-1 -purpose="$2 NM beginning at 90min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=7210.0 -nm_end=-1 -purpose="$2 NM beginning at 120min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=9010.0 -nm_end=-1 -purpose="$2 NM beginning at 150min, recall after 8h" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28815 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -nm_begin=10810.0 -nm_end=-1 -purpose="$2 NM beginning at 180min, recall after 8h" +} + +run_simulations_nm "0.06" "Low" # low neuromodulator level +run_simulations_nm "0.18" "High" # high neuromodulator level diff --git a/simulation-bin/run_binary_paper3/run_raster_10s b/simulation-bin/run_binary_paper3/run_raster_10s new file mode 100644 index 0000000..875611f --- /dev/null +++ b/simulation-bin/run_binary_paper3/run_raster_10s @@ -0,0 +1,28 @@ +#!/bin/sh +rm -f saved_state.txt + +run_simulations_nm() +{ + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf10 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf20 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 
-sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf30 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf40 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf50 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf70 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf80 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf90 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=30 -N_stim=4 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 10s, nm=$1" +} + +run_simulations_nm "0.00" +run_simulations_nm "0.02" +run_simulations_nm "0.04" 
+run_simulations_nm "0.06" +run_simulations_nm "0.08" +run_simulations_nm "0.10" +run_simulations_nm "0.12" +run_simulations_nm "0.14" +run_simulations_nm "0.16" +run_simulations_nm "0.18" + diff --git a/simulation-bin/run_binary_paper3/run_raster_8h b/simulation-bin/run_binary_paper3/run_raster_8h new file mode 100644 index 0000000..56a7923 --- /dev/null +++ b/simulation-bin/run_binary_paper3/run_raster_8h @@ -0,0 +1,27 @@ +#!/bin/sh +rm -f saved_state.txt + +run_simulations_nm() +{ + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf10 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf20 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf30 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf40 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf50 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf60 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf70 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 
-w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf80 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf90 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" + ./net.out -Nl=40 -Nl_inh=20 -t_max=28820 -N_stim=4 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -nm_amp=$1 -purpose="Recall after 8h, nm=$1" +} + +run_simulations_nm "0.00" +run_simulations_nm "0.02" +run_simulations_nm "0.04" +run_simulations_nm "0.06" +run_simulations_nm "0.08" +run_simulations_nm "0.10" +run_simulations_nm "0.12" +run_simulations_nm "0.14" +run_simulations_nm "0.16" +run_simulations_nm "0.18" diff --git a/simulation-code/Definitions.hpp b/simulation-code/Definitions.hpp index e2718e2..3e872a7 100644 --- a/simulation-code/Definitions.hpp +++ b/simulation-code/Definitions.hpp @@ -10,6 +10,7 @@ #define EPSILON 10e-12 // very small number that is counted as zero #define ON 1 // to switch on a function #define OFF 0 // to switch off a function +#define NAN std::numeric_limits<double>::quiet_NaN() // STIM_TYPE #define POISSON_STIMULATION 1 // enables stimulation with Poisson-like spiking @@ -64,8 +65,17 @@ #define OVERLAP20_3RD_NO_BC_NO_ABC 18 // use a third block of neurons as the assembly, overlapping by 20% with the second assembly exclusively #define RAND 19 // use randomly selected neurons as the assembly -// PLASTICITY +// PLASTICITY MECHANISMS #define CALCIUM 1 // use the calcium model as plasticity mechanism (early phase only) #define CALCIUM_AND_STC 2 // use the calcium model with synaptic tagging and 
capture as plasticity mechanism #define STDP 3 // use an STDP rule as plasticity mechanism (early phase only) #define STDP_AND_STC 4 // use an STDP rule with synaptic tagging and capture as plasticity mechanism + +// PLASTICITY THRESHOLDS +#define THRP_P 1 // for LTP +#define THRP_D 2 // for LTD +#define THRP_C 3 // for both LTP and LTD +#define THRW_CA 1 // early-phase calcium threshold +#define THRW_TAG 2 // tagging threshold +#define THRW_PRO 3 // protein synthesis threshold + diff --git a/simulation-code/Network.cpp b/simulation-code/Network.cpp index 8c5f125..11e8e2c 100644 --- a/simulation-code/Network.cpp +++ b/simulation-code/Network.cpp @@ -8,15 +8,15 @@ #include #include -using namespace std; - #include "Neuron.cpp" -struct synapse // structure for synapse definition +/*** Synapse structure *** + * defines a synapse by specifying pre- and postsynaptic neuron */ +struct Synapse { int presyn_neuron; // the number of the presynaptic neuron int postsyn_neuron; // the number of the postsynaptic neuron - synapse(int _presyn_neuron, int _postsyn_neuron) // constructor + Synapse(int _presyn_neuron, int _postsyn_neuron) // constructor { presyn_neuron = _presyn_neuron; postsyn_neuron = _postsyn_neuron; @@ -76,9 +76,9 @@ double Ca_post; // increase in calcium current evoked by postsynaptic spike double tau_Ca; // s, time constant for calcium dynamics double tau_Ca_steps; // time constant for calcium dynamics in timesteps double tau_h; // s, time constant for early-phase plasticity -double tau_pp; // h, time constant of LTP-related protein synthesis -double tau_pc; // h, time constant of common protein synthesis -double tau_pd; // h, time constant of LTD-related protein synthesis +double tau_p_p; // min, time constant of LTP-related protein synthesis +double tau_p_c; // min, time constant of common protein synthesis +double tau_p_d; // min, time constant of LTD-related protein synthesis double tau_z; // min, time constant of consolidation double gamma_p; // constant for 
potentiation process double gamma_d; // constant for depression process @@ -96,7 +96,14 @@ double theta_tag_p; // mV, threshold for LTP-related tag double theta_tag_d; // mV, threshold for LTD-related tag double z_max; // upper z bound +/*** Neuromodulation parameters ***/ +double nm_paradigm_index; // index of the paradigm for modeling the neuromodulator concentration and protein synthesis threshold +double nm_amp; // amplitude of the neuromodulator concentration +Stimulus nm_protocol; // neuromodulation protocol modeled by a Stimulus object +double (Network::*psth_c_function_ptr)(double) const; // pointer to the function used to compute the protein synthesis threshold from the neuromodulator concentration + public: + #ifdef TWO_NEURONS_ONE_SYNAPSE bool tag_glob; // specifies if a synapse was tagged ever bool ps_glob; // specifies if protein synthesis ever occurred in any neuron @@ -154,7 +161,6 @@ int tb_max_sum_diff_d; // time bin at which max_sum_diff_d was encountered * - int n: the consecutive element number */ #define symm(n) (cNN(col(n),row(n))) - /*** shallBeConnected *** * Draws a uniformly distributed random number from the interval 0.0 to 1.0 and returns, * * depending on the connection probability, whether or not a connection shall be established * @@ -266,9 +272,9 @@ void saveNetworkParams(ofstream *f) const *f << "Ca_post = " << Ca_post << endl; *f << "tau_Ca = " << tau_Ca << " s" << endl; *f << "tau_h = " << tau_h << " s" << endl; - *f << "tau_pp = " << tau_pp << " h" << endl; - *f << "tau_pc = " << tau_pc << " h" << endl; - *f << "tau_pd = " << tau_pd << " h" << endl; + *f << "tau_p_p = " << tau_p_p << " min" << endl; + *f << "tau_p_c = " << tau_p_c << " min" << endl; + *f << "tau_p_d = " << tau_p_d << " min" << endl; *f << "tau_z = " << tau_z << " min" << endl; *f << "z_max = " << z_max << endl; *f << "gamma_p = " << gamma_p << endl; @@ -283,13 +289,35 @@ void saveNetworkParams(ofstream *f) const *f << "alpha_c = " << alpha_c << endl; *f << 
"alpha_d = " << alpha_d << endl; - double nm = 1. / (theta_pro_c/h_0) - 0.001; // compute neuromodulator concentration from threshold theta_pro_c + //double nm_amp = 1. / (theta_pro_c/h_0) - 0.001; // compute amplitude of neuromodulator concentration from threshold theta_pro_c *f << "theta_pro_p = " << dtos(theta_pro_p/h_0,2) << " h_0" << endl; - *f << "theta_pro_c = " << dtos(theta_pro_c/h_0,2) << " h_0 (nm = " << dtos(nm,2) << ")" << endl; + //*f << "theta_pro_c = " << dtos(theta_pro_c/h_0,2) << " h_0 (nm = " << dtos(nm_amp,2) << ")" << endl; + *f << "theta_pro_c = " << dtos(theta_pro_c/h_0,2) << " h_0" << endl; *f << "theta_pro_d = " << dtos(theta_pro_d/h_0,2) << " h_0" << endl; *f << "theta_tag_p = " << dtos(theta_tag_p/h_0,2) << " h_0" << endl; *f << "theta_tag_d = " << dtos(theta_tag_d/h_0,2) << " h_0" << endl; + // parameters of NeuromodulatorFunctions object + *f << endl; + *f << "Neuromodulation parameters:" << endl; + *f << "paradigm = " << nm_paradigm_index << endl; + *f << "nm_amp = " << dtos(nm_amp,2); +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD + if (nm_paradigm_index >= 0) + *f << " (equals theta_pro_c = " << dtos((this->*psth_c_function_ptr)(nm_amp)/h_0,2) << " h_0)"; +#endif + *f << endl; + if (nm_protocol.getStimulationDuration() > 0) + { + *f << "nm_begin = " << dtos(nm_protocol.getStimulationStart()*dt,1) << " s (" << nm_protocol.getStimulationStart() << " steps)" << endl; + *f << "nm_end = " << dtos(nm_protocol.getStimulationEnd()*dt,1) << " s (" << nm_protocol.getStimulationEnd() << " steps)" << endl; + } + else + { + *f << "nm_begin = " << NAN << endl; + *f << "nm_end = " << NAN << endl; + } + neurons[0].saveNeuronParams(f); // all neurons have the same parameters, take the first one } @@ -359,6 +387,17 @@ template void serialize(Archive &ar, const unsigned int version) } } +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD +/*** getThetaProC *** + * Returns the threshold value for protein synthesis (for common pool) 
as a function of neuromodulator concentration * + * (cf. Clopath et al., 2008) + * - double nm: neuromodulator concentration in arbitrary units * + * - return: threshold value in mV */ +inline double getThetaProC_Clopath2008(double nm) const +{ + return 1. / (nm + 0.001) * h_0; // compute protein synthesis threshold from neuromodulator concentration +} +#endif /*** processTimeStep *** * Processes one timestep (of duration dt) for the network [rich mode / compmode == 1] * @@ -373,6 +412,11 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL) bool STC = false; // specifies if at least one synapse is tagged and receives proteins bool ps_neuron = false; // specifies if at least one neuron is exhibiting protein synthesis +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD + if (nm_protocol.isSet()) // if neuromodulation protocol is set (otherwise using default 'theta_pro_c' value) + theta_pro_c = (this->*psth_c_function_ptr)(nm_protocol.get(tb)); // compute protein synthesis threshold from neuromodulator amount +#endif + /*******************************************************/ // compute neuronal dynamics for (int m=0; m= theta_pro_p) || (sum_h_diff_d[m] >= theta_pro_d); - pa_p = pa_p * exp(-dt/(tau_pp * 3600.)) + alpha_p * step(sum_h_diff_p[m] - theta_pro_p) * (1. - exp(-dt/(tau_pp * 3600.))); - pa_d = pa_d * exp(-dt/(tau_pd * 3600.)) + alpha_d * step(sum_h_diff_d[m] - theta_pro_d) * (1. - exp(-dt/(tau_pd * 3600.))); - // [simple Euler: pa += (- pa + alpha * step(sum_h_diff - theta_pro)) * (dt / (tau_p * 3600.));] + pa_p = pa_p * exp(-dt/(tau_p_p * 60.)) + alpha_p * step(sum_h_diff_p[m] - theta_pro_p) * (1. - exp(-dt/(tau_p_p * 60.))); + pa_d = pa_d * exp(-dt/(tau_p_d * 60.)) + alpha_d * step(sum_h_diff_d[m] - theta_pro_d) * (1. 
- exp(-dt/(tau_p_d * 60.))); + // [simple Euler: pa += (- pa + alpha * step(sum_h_diff - theta_pro)) * (dt / (tau_p * 60.));] sum_h_diff_p[m] = 0.; sum_h_diff_d[m] = 0.; @@ -457,7 +501,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL) ps_neuron = ps_neuron || (sum_h_diff[m] >= theta_pro_c); - pa_c = pa_c * exp(-dt/(tau_pc * 3600.)) + alpha_c * step(sum_h_diff[m] - theta_pro_c) * (1. - exp(-dt/(tau_pc * 3600.))); // === ESSENTIAL === + pa_c = pa_c * exp(-dt/(tau_p_c * 60.)) + alpha_c * step(sum_h_diff[m] - theta_pro_c) * (1. - exp(-dt/(tau_p_c * 60.))); // === ESSENTIAL === sum_h_diff[m] = 0.; #else @@ -587,7 +631,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL) double C = 0.1 + gamma_d; double hexp = exp(-dt*C/tau_h); h[m][n] = h[m][n] * hexp + (0.1*h_0 + noise) / C * (1.- hexp); - // [simple Euler: h[m][n] += ((0.1 * (h_0 - h[m][n]) + gamma_d * h[m][n] + noise)*(dt/tau_h));] + // [simple Euler: h[m][n] += ((0.1 * (h_0 - h[m][n]) - gamma_d * h[m][n] + noise)*(dt/tau_h));] if (abs(h[m][n] - h_0) > abs(max_dev)) { @@ -746,6 +790,11 @@ int processTimeStep_FF(int tb, double delta_t, ofstream* logf) bool STC = false; // specifies if at least one synapse is tagged and receives proteins bool ps_neuron = false; // specifies if at least one neuron is exhibiting protein synthesis +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD + if (nm_protocol.isSet()) // if neuromodulation protocol is set (otherwise using default 'theta_pro_c' value) + theta_pro_c = (this->*psth_c_function_ptr)(nm_protocol.get(tb)); // compute protein synthesis threshold from neuromodulator amount +#endif + /*******************************************************/ // compute neuronal dynamics for (int m=0; m getProteinSynthesisEnd() /*** getThreshold *** * Returns a specified threshold value (for tag or protein synthesis) * - * - plast: the type of plasticity (1: LTP, 2: LTD) - * - which: the type of threshold (1: early-phase calcium treshold, 2: tagging 
threshold, 3: protein synthesis threshold) + * - plast: the type of plasticity (LTP, LTD, or common) + * - which: the type of threshold (early-phase calcium threshold, tagging threshold, or protein synthesis threshold) * - return: the threshold value */ double getThreshold(int plast, int which) { - if (plast == 1) // LTP + if (plast == THRP_P) // LTP { - if (which == 1) // early-phase calcium treshold - return theta_p; - else if (which == 2) // tagging threshold - return theta_tag_p; - else // protein synthesis threshold - return theta_pro_p; + switch (which) + { + case THRW_CA: // early-phase calcium threshold + return theta_p; + case THRW_TAG: // tagging threshold + return theta_tag_p; + case THRW_PRO: // protein synthesis threshold + return theta_pro_p; + } } - else // LTD + else if (plast == THRP_D) // LTD { - if (which == 1) // early-phase calcium treshold - return theta_d; - else if (which == 2) // tagging threshold - return theta_tag_d; - else // protein synthesis threshold - return theta_pro_d; + switch (which) + { + case THRW_CA: // early-phase calcium threshold + return theta_d; + case THRW_TAG: // tagging threshold + return theta_tag_d; + case THRW_PRO: // protein synthesis threshold + return theta_pro_d; + } } + else if (plast == THRP_C) // common + { + switch (which) + { + case THRW_PRO: // protein synthesis threshold + return theta_pro_c; + } + } + return -1.; } @@ -1324,6 +1388,8 @@ void setCouplingStrengths(double _w_ee, double _w_ei, double _w_ie, double _w_ii w_ei = _w_ei * h_0; w_ie = _w_ie * h_0; w_ii = _w_ii * h_0; + + //cout << "w_ei=" << w_ei << ", w_ie=" << w_ie << ", w_ii=" << w_ii << endl; } /*** getInitialWeight *** @@ -1335,27 +1401,27 @@ double getInitialWeight() /*** getSynapticCalcium *** * Returns the calcium amount at a given synapse * - * - synapse s: structure specifying pre- and postsynaptic neuron * + * - Synapse s: structure specifying pre- and postsynaptic neuron * * - return: the synaptic calcium amount */ -double 
getSynapticCalcium(synapse s) const +double getSynapticCalcium(Synapse s) const { return Ca[s.presyn_neuron][s.postsyn_neuron]; } /*** getEarlySynapticStrength *** * Returns the early-phase synaptic strength at a given synapse * - * - synapse s: structure specifying pre- and postsynaptic neuron * + * - Synapse s: structure specifying pre- and postsynaptic neuron * * - return: the early-phase synaptic strength */ -double getEarlySynapticStrength(synapse s) const +double getEarlySynapticStrength(Synapse s) const { return h[s.presyn_neuron][s.postsyn_neuron]; } /*** getLateSynapticStrength *** * Returns the late-phase synaptic strength at a given synapse * - * - synapse s: structure specifying pre- and postsynaptic neuron * + * - Synapse s: structure specifying pre- and postsynaptic neuron * * - return: the late-phase synaptic strength */ -double getLateSynapticStrength(synapse s) const +double getLateSynapticStrength(Synapse s) const { return z[s.presyn_neuron][s.postsyn_neuron]; } @@ -1867,8 +1933,8 @@ void resetPlasticity(bool early_phase, bool late_phase, bool calcium, bool prote * Resets the network and all neurons to initial state (but maintain connectivity) */ void reset() { -#ifdef TWO_NEURONS_ONE_SYNAPSE_MIN - rg.seed(0.); // determinsitic computation +#if defined TWO_NEURONS_ONE_SYNAPSE_MIN || defined TWO_NEURONS_ONE_SYNAPSE_CONV + rg.seed(0.); // deterministic computation #else rg.seed(getClockSeed()); // set new random seed by clock's epoch #endif @@ -1950,6 +2016,44 @@ void setPSThresholds(double _theta_pro_P, double _theta_pro_C, double _theta_pro theta_pro_d = _theta_pro_D*h_0; } +/*** setNeuromodulationParameters *** + * Set parameters for the neuromodulation protocol * + * - int _nm_paradigm_index: specifies which neuromodulator paradigm is to be used (for negative value, removes the neuromodulation) * + * - double _nm_amp [optional]: the amplitude of the neuromodulation (exact meaning depends on the paradigm) * + * - double _nm_begin [optional]: the 
beginning of neuromodulation * + * - double _nm_end [optional]: the end of neuromodulation */ +void setNeuromodulationParameters(int _nm_paradigm_index, double _nm_amp = NAN, double _nm_begin = NAN, double _nm_end = NAN) +{ + if (_nm_paradigm_index < 0) + { + nm_protocol.clear(); + } +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD + else if (_nm_paradigm_index == 0) + { + int index = nm_protocol.addStimulationInterval(int(round(_nm_begin/dt)), int(round(_nm_end/dt))); // add interval with start and end time of neuromodulation + + nm_protocol.setDetStimulation(10, index); // set deterministic neuromodulation (with period duration of ten timesteps) + nm_protocol.addRectPulse(_nm_amp, 0, 10, index); // add rectangular pulse lasting for ten timesteps + //nm_protocol.plotAll("nm_protocol"); + + psth_c_function_ptr = &Network::getThetaProC_Clopath2008; // set pointer to function computing the protein synthesis threshold from the neuromodulator amount + } +#endif + /*else if (_nm_paradigm_index == 1) // TODO: other paradigms + { + + }*/ + else + { + cout << "WARNING: specified neuromodulation function " << _nm_paradigm_index << " is not implemented!" 
<< endl; + return; + } + + nm_paradigm_index = _nm_paradigm_index; + nm_amp = _nm_amp; +} + /*** Constructor *** * Sets all parameters, creates neurons and synapses * * --> it is required to call setSynTimeConstant and setCouplingStrengths immediately * @@ -1961,10 +2065,9 @@ void setPSThresholds(double _theta_pro_P, double _theta_pro_C, double _theta_pro * - double _p_c: connection probability * * - double _sigma_plasticity: standard deviation of the plasticity * * - double _z_max: the upper z bound */ - Network(const double _dt, const int _Nl_exc, const int _Nl_inh, double _p_c, double _sigma_plasticity, double _z_max) : - dt(_dt), u_dist(0.0,1.0), norm_dist(0.0,1.0), Nl_exc(_Nl_exc), Nl_inh(_Nl_inh), z_max(_z_max), -#ifdef TWO_NEURONS_ONE_SYNAPSE_MIN + dt(_dt), u_dist(0.0,1.0), norm_dist(0.0,1.0), Nl_exc(_Nl_exc), Nl_inh(_Nl_inh), z_max(_z_max), nm_protocol(_dt), +#if defined TWO_NEURONS_ONE_SYNAPSE_MIN || defined TWO_NEURONS_ONE_SYNAPSE_CONV rg(0.) #else rg(getClockSeed()) @@ -1997,9 +2100,9 @@ Network(const double _dt, const int _Nl_exc, const int _Nl_inh, double _p_c, dou sigma_plasticity = _sigma_plasticity; // from Graupner and Brunel (2012) but corrected by 1/sqrt(1000) // Biophysical parameters for protein synthesis and late phase - tau_pp = 1.0; // from Li et al. (2016) - tau_pc = 1.0; // from Li et al. (2016) - tau_pd = 1.0; // from Li et al. (2016) + tau_p_p = 60.0; // from Li et al. (2016) + tau_p_c = 60.0; // from Li et al. (2016) + tau_p_d = 60.0; // from Li et al. (2016) alpha_p = 1.0; // from Li et al. (2016) alpha_c = 1.0; // from Li et al. (2016) alpha_d = 1.0; // from Li et al. (2016) @@ -2009,6 +2112,7 @@ Network(const double _dt, const int _Nl_exc, const int _Nl_inh, double _p_c, dou theta_pro_d = 0.5*h_0; // theta_tag_p = 0.2*h_0; // from Li et al. (2016) theta_tag_d = 0.2*h_0; // from Li et al. 
(2016) + setNeuromodulationParameters(-1); // Create neurons and synapse matrices neurons = vector<Neuron> (N, Neuron(_dt)); @@ -2045,11 +2149,11 @@ Network(const double _dt, const int _Nl_exc, const int _Nl_inh, double _p_c, dou } } -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if defined TWO_NEURONS_ONE_SYNAPSE_LI2016 || defined TWO_NEURONS_ONE_SYNAPSE_P1 neurons[1].setPoisson(true); - #ifdef PLASTICITY_OVER_FREQ +#elif defined PLASTICITY_OVER_FREQ neurons[0].setPoisson(true); - #endif + neurons[1].setPoisson(true); #endif } diff --git a/simulation-code/NetworkMain.cpp b/simulation-code/NetworkMain.cpp index f20f06f..d7a135a 100644 --- a/simulation-code/NetworkMain.cpp +++ b/simulation-code/NetworkMain.cpp @@ -30,6 +30,9 @@ double tau_syn = 0.005; // s, synaptic time constant double I_0 = 0.15; // nA, mean current applied to the individual neurons (e.g., mean of the OU process) double sigma_WN = 0.05; // nA s^1/2, standard deviation of the input current (e.g., std. dev. of the Gaussian white noise driving the OU process) +double recall_fraction = 0.5; // the fraction of neurons in the cell assembly that is stimulated to trigger recall +int N_stim = 25; // number of hypothetical synapses used for stimulation + double theta_p = 3.0; // the calcium threshold for early-phase potentiation double theta_d = 1.2; // the calcium threshold for early-phase depression double Ca_pre = 0.6; // the postsynaptic calcium contribution evoked by a presynaptic spike (Li et al., 2016, not adjusted: 1.0) @@ -39,13 +42,20 @@ double sigma_plasticity = 9.1844 / sqrt(1000) * 10.; // mV, standard deviation o double theta_pro_P = 0.5; // protein synthesis threshold for pool P in units of h_0 double theta_pro_C = 0.5; // protein synthesis threshold for pool C in units of h_0 double theta_pro_D = 0.5; // protein synthesis threshold for pool D in units of h_0 -double z_max = 1.; // the upper late-phase bound -double recall_fraction = 0.5; // the fraction of neurons in the cell assembly that is stimulated to 
trigger recall -int N_stim = 25; // number of hypothetical synapses used for stimulation +#ifdef MEMORY_CONSOLIDATION_NM_STC_P3 +int nm_paradigm_index = 0; // defines the paradigm for modeling the neuromodulator concentration and protein synthesis threshold +#else +int nm_paradigm_index = -1; +#endif +double nm_amp = 2.; // amplitude of the neuromodulator concentration (exact meaning depends on the paradigm) +double nm_begin = 0; // s, defines the begin time for the neuromodulation protocol +double nm_end = -1.; // s, defines the end time for the neuromodulation protocol + +double z_max = 1.; // the upper late-phase bound -double oscill_inp_mean = 0.; // nA, -double oscill_inp_amp = 0.; // nA, +double oscill_inp_mean = 0.; // nA, the mean value of oscillatory input current +double oscill_inp_amp = 0.; // nA, the amplitude of oscillatory input current bool ff_enabled = true; // specifies if fast-forward mode (no computation of spiking dynamics) is allowed int output_period = 10; // in general, every "output_period-th" timestep, data will be recorded for plotting @@ -120,8 +130,8 @@ int main(int argc, char** argv) w_ii = read; else if (strstr(argv[i], "-z_max=") == argv[i] || strstr(argv[i], "-zmax=") == argv[i]) z_max = read; - else if (strstr(argv[i], "-nm=") == argv[i]) - theta_pro_C = 1. 
/ (read + 0.001); // compute threshold theta_pro_C (in units of h_0) from neuromodulator concentration + else if (strstr(argv[i], "-nm_amp=") == argv[i] || strstr(argv[i], "-nm=") == argv[i]) + nm_amp = read; else if (strstr(argv[i], "-theta_pro_C=") == argv[i] || strstr(argv[i], "-theta_pro=") == argv[i]) theta_pro_C = read; else if (strstr(argv[i], "-theta_pro_P=") == argv[i]) @@ -164,6 +174,12 @@ int main(int argc, char** argv) Ca_post = read; else if (strstr(argv[i], "-r=") == argv[i]) recall_fraction = read; + else if (strstr(argv[i], "-nm_paradigm=") == argv[i]) + nm_paradigm_index = (int)read; + else if (strstr(argv[i], "-nm_begin=") == argv[i]) + nm_begin = read; + else if (strstr(argv[i], "-nm_end=") == argv[i]) + nm_end = read; // specifying parameter file (all parameters passed previously will be overwritten) //else if (strstr(argv[i], "-F") == argv[i] && (i+1) < argc && strstr(argv[i+1], "-") != argv[i]) @@ -187,7 +203,7 @@ int main(int argc, char** argv) cout << "TWO_NEURONS_ONE_SYNAPSE*" << endl; #endif - #if defined TWO_NEURONS_ONE_SYNAPSE_MIN + #if defined TWO_NEURONS_ONE_SYNAPSE_MIN || defined TWO_NEURONS_ONE_SYNAPSE_CONV output_period = 1; #else output_period = 100; @@ -196,12 +212,13 @@ int main(int argc, char** argv) Nl_exc = 2; Nl_inh = 0; + #if !defined TWO_NEURONS_ONE_SYNAPSE_CONV I_0 = 0.; + #endif sigma_WN = 0.; N_stim = 1; #elif defined SEEK_I_0 purpose = "seek"; - purpose_set = true; output_period = 1000; #endif // Create working directory and files @@ -218,12 +235,17 @@ int main(int argc, char** argv) extractFileFromBinary("code.zip", data, data_end); // extracts code.zip out of the binary extractFileFromBinary("plotFunctions.py", data2, data2_end); // extracts plotFunctions.py out of the binary writeRunFile(string("run"), argc, argv); // writes the command line that was used to run this program in a file - + if (nm_end < -EPSILON) + { + nm_end = t_max; + cout << "End time of neuromodulation set to end time of the simulation." 
<< endl; + } + // Create NetworkSimulation object and set computational parameters NetworkSimulation sn = NetworkSimulation(Nl_exc, Nl_inh, dt, t_max, p_c, sigma_plasticity, z_max, t_wfr, ff_enabled); // Seeking I_0 -#ifdef SEEK_I_0 //---------------------------------------------------------------------- +#ifdef SEEK_I_0 //----------------------------------------------------------------------- ofstream seekICData(dateStr("_I_0_nu.txt")); ofstream seekICResult(dateStr("_I_0.txt")); @@ -235,21 +257,19 @@ int main(int argc, char** argv) sn.setSeekICVar(&seekICVar); // pass address of seekICVar to NetworkSimulation object -#endif //-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~- - + while(true) // is cancelled by "break" once target firing rate has been found + { +#endif //-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~ + sn.setParams(I_0, sigma_WN, tau_syn, w_ee, w_ei, w_ie, w_ii, oscill_inp_mean, oscill_inp_amp, - prot_learn, prot_recall, output_period, N_stim, theta_p, theta_d, Ca_pre, Ca_post, theta_pro_P, theta_pro_C, theta_pro_D, recall_fraction); + prot_learn, prot_recall, N_stim, recall_fraction, theta_p, theta_d, Ca_pre, Ca_post, + theta_pro_P, theta_pro_C, theta_pro_D, nm_paradigm_index, nm_amp, nm_begin, nm_end, output_period); // set main parameters for the simulation -#ifndef SEEK_I_0 +#ifndef SEEK_I_0 //---------------------------------------------------------------------- sn.simulate(path, true, purpose); // run the actual simulation -#else //------------------------------------------------------------------------------------ - - while(true) // is cancelled by "break" once target firing rate has been found - { - sn.setParams(I_0, sigma_WN, tau_syn, w_ee, w_ei, w_ie, w_ii, oscill_inp_mean, oscill_inp_amp, - prot_learn, prot_recall, output_period, N_stim, theta_p, theta_d, Ca_pre, Ca_post, theta_pro_P, theta_pro_C, theta_pro_D, recall_fraction); // TODO: remove syntactic redundance +#else 
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~- sn.simulate(path, (I_0 == I_0_start) ? true : false, purpose); // run the actual simulation diff --git a/simulation-code/NetworkSimulation.cpp index 7760191..9a2dc11 100644 --- a/simulation-code/NetworkSimulation.cpp +++ b/simulation-code/NetworkSimulation.cpp @@ -13,15 +13,13 @@ #include <sys/stat.h> // for mkdir() #include "Definitions.hpp" -using namespace std; - // Simulation options (cf. Definitions.hpp) #define STIM_TYPE OU_STIMULATION // the type of stimulation that enters the neurons (not to be confused with the stimulus protocol) #define NEURON_MODEL LIF // definition of the neuron model #define SYNAPSE_MODEL MONOEXP // the synapse model that is used #define PLASTICITY CALCIUM_AND_STC // defines what type of plasticity shall be used (or OFF) #define RAND_INIT_WEIGHTS OFF // specifies whether initial early-phase weight shall be randomly (log-normally) distributed or not -#define PROTEIN_POOLS POOLS_C // the protein pool setting for synaptic consolidation/plasticity-related proteins +#define PROTEIN_POOLS POOLS_C // the pools of plasticity-related proteins for late-phase plasticity #define STIPULATE_CA OFF // if ON: stipulate a cell assembly with strong interconnections at the beginning of the learning stimulus, no actual learning is required #define CORE_SHAPE FIRST // shape and position of the cell assembly #define CORE_SIZE 150 // size of the cell assembly @@ -38,11 +36,14 @@ using namespace std; // Output options (cf.
Definitions.hpp) #define SPIKE_PLOTTING NUMBER_AND_RASTER // defines what information about spiking dynamics is saved and plotted #define PRINT_CONNECTIONS ON // if ON: output of connectivity matrix of excitatory subnetwork using Network::printConnections() +#define PRINT_LEARN_STIMULUS OFF // if ON: output of learning stimulus using Stimulus::plotAll() #define STIM_PREPROC OFF // defines if stimuli are pre-processed (at cost of memory, can speed up, but can also slow down runtime!) #define NET_OUTPUT_PRECISION 7 // precision of numbers (also of h_0) used for network plots #define OUTPUT_PRECISION 9 // precision of numbers used for plots over time #define SAVE_NET_STATE ON // if ON: output of whole simulation data before recall (files can be several hundreds of megabytes large) +using namespace std; + #include "SpecialCases.hpp" #include "Tools.cpp" #include "Network.cpp" @@ -82,7 +83,7 @@ double oscill_inp_amp; // nA, amplitude of sinusoidal oscillatory input to excit /*** Output parameters ***/ vector<int> exc_neuron_output {}; vector<int> inh_neuron_output {}; -vector<synapse> synapse_output {}; +vector<Synapse> synapse_output {}; int output_period; // number of timesteps to pass for the next data output (if set to 1, most detailed output is obtained) vector<int> net_output {}; // vector of times selected for the output of network plots (mind the larger timesteps in FF mode!)
@@ -311,42 +312,54 @@ int simulate(string working_dir, bool first_sim, string _purpose) #ifdef TWO_NEURONS_ONE_SYNAPSE exc_neuron_output = vector<int> {0,1}; inh_neuron_output = vector<int> {}; - synapse_output = vector<synapse> {synapse(1,0)}; + synapse_output = vector<Synapse> {Synapse(1,0)}; #elif defined SEEK_I_0 exc_neuron_output = vector<int> {}; inh_neuron_output = vector<int> {}; #else - if (Nl_exc == 20) + if (Nl_exc == 2) { - exc_neuron_output = vector<int> {0,1,2}; - inh_neuron_output = vector<int> {400}; - synapse_output = vector<synapse> {synapse(0,1),synapse(0,50),synapse(0,100),synapse(400,0)}; +#if defined MAX_ACTIVITY_NEURON // in the case of "small net" + exc_neuron_output = vector<int> {0,1,2,3}; + inh_neuron_output = vector<int> {}; + synapse_output = vector<Synapse> {Synapse(0,1),Synapse(0,2),Synapse(0,3),Synapse(2,3),Synapse(3,2)}; +#endif } else if (Nl_exc == 35) { //exc_neuron_output = vector<int> {608,609,610}; exc_neuron_output = vector<int> {0,1,2}; inh_neuron_output = vector<int> {1225}; - //synapse_output = vector<synapse> {synapse(608,609),synapse(609,608),synapse(609,610)}; - synapse_output = vector<synapse> {synapse(0,1),synapse(0,50),synapse(0,100)}; + //synapse_output = vector<Synapse> {Synapse(608,609),Synapse(609,608),Synapse(609,610)}; + synapse_output = vector<Synapse> {Synapse(0,1),Synapse(0,50),Synapse(0,100)}; } else if (Nl_exc == 40) { //exc_neuron_output = vector<int> {816,817,818}; //exc_neuron_output = vector<int> {cNN(20,21),cNN(23,21),cNN(30,21)}; // three neurons: one "as", one "ans" and one "e" //exc_neuron_output = vector<int> {1, 53, 260, 660, 777, 940, 941}; +#if defined MAX_ACTIVITY_NEURON + exc_neuron_output = vector<int> {0, 12, 335}; + inh_neuron_output = vector<int> {1918, 1920, 1940}; // note: 1940 does not receive connection from 0 + synapse_output = vector<Synapse> {Synapse(0,12)}; +#elif defined ONESPIKE // i.e., ONESPIKE_EXC or ONESPIKE_INH + exc_neuron_output = vector<int> {6, 17, 68}; + inh_neuron_output = vector<int> {1615, 1690, 1760}; + synapse_output = vector<Synapse> {Synapse(6,68)}; +#else exc_neuron_output = vector<int> {6, 68}; //inh_neuron_output = vector<int> {1615, 1710, 1750};
inh_neuron_output = vector<int> {1615}; - //synapse_output = vector<synapse> {synapse(777,260), synapse(777,940), synapse(777,941),synapse(660,260), - // synapse(660,940), synapse(660,941), synapse(782,1), synapse(782,53)}; - synapse_output = vector<synapse> {synapse(6,68)}; + //synapse_output = vector<Synapse> {Synapse(777,260), Synapse(777,940), Synapse(777,941),Synapse(660,260), + // Synapse(660,940), Synapse(660,941), Synapse(782,1), Synapse(782,53)}; + synapse_output = vector<Synapse> {Synapse(6,68)}; +#endif } else if (Nl_exc == 50) { exc_neuron_output = vector<int> {1,640,1300}; inh_neuron_output = vector<int> {2500}; - synapse_output = vector<synapse> {synapse(1,9),synapse(1,640),synapse(1,1300)}; + synapse_output = vector<Synapse> {Synapse(1,9),Synapse(1,640),Synapse(1,1300)}; } #endif // TWO_NEURONS_ONE_SYNAPSE @@ -397,13 +410,15 @@ int simulate(string working_dir, bool first_sim, string _purpose) cout << "ERROR: Python script 'plotFunctions.py' not found!" << endl; return -2; } +#else + const string path = working_dir; #endif // SEEK_I_0 // Create directory for network plots - const string path = working_dir + "/network_plots"; // path to 'network_plots' directory - if (mkdir(path.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH)) // create 'network_plots' directory + const string nppath = path + "/network_plots"; // path to 'network_plots' directory + if (mkdir(nppath.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH)) // create 'network_plots' directory { - cout << "ERROR: failed to create directory \"" << path << "\"!" << endl; + cout << "ERROR: failed to create directory \"" << nppath << "\"!"
<< endl; return -3; } @@ -489,6 +504,10 @@ int simulate(string working_dir, bool first_sim, string _purpose) int tb_recall_start = st_recall.getStimulationStart(); // time bin in which recall stimulation begins net.setStimulationEnd(tb_stim_end); +#if PRINT_LEARN_STIMULUS == ON + st_learn.plotAll("learning_stimulus"); +#endif + if (tb_start > 0) logf << "Network state loaded, starting simulation from t = " << dtos(tb_start*dt, 3) << " s" << endl; @@ -629,13 +648,13 @@ int simulate(string working_dir, bool first_sim, string _purpose) << "theta(" << exc_neuron_output[nn] << ")\t\t" // exc. neuron threshold #endif #if PROTEIN_POOLS == POOLS_C - << "p^C(" << exc_neuron_output[nn] << ")\t\t"; // exc. neuron unspecific protein amount + << "p^C(" << exc_neuron_output[nn] << ")\t\t"; // exc. neuron unspecific (common) protein amount #elif PROTEIN_POOLS == POOLS_PD << "p^LTP(" << exc_neuron_output[nn] << ")\t\t" // exc. neuron LTP protein amount << "p^LTD(" << exc_neuron_output[nn] << ")\t\t"; // exc. neuron LTD protein amount #elif PROTEIN_POOLS == POOLS_PCD << "p^LTP(" << exc_neuron_output[nn] << ")\t\t" // exc. neuron LTP protein amount - << "p^C(" << exc_neuron_output[nn] << ")\t\t" // exc. neuron unspecific protein amount + << "p^C(" << exc_neuron_output[nn] << ")\t\t" // exc. neuron unspecific protein amount << "p^LTD(" << exc_neuron_output[nn] << ")\t\t"; // exc. 
neuron LTD protein amount #endif } @@ -700,6 +719,15 @@ int simulate(string working_dir, bool first_sim, string _purpose) net.setSingleNeuronStimulus(0, st_learn2); #endif +#elif defined MAX_ACTIVITY_NEURON + // only stimulate neuron 0 + net.setSingleNeuronStimulus(0, st_learn); +#elif defined ONESPIKE_EXC + // only stimulate neuron 6 + net.setSingleNeuronStimulus(6, st_learn); +#elif defined ONESPIKE_INH + // only stimulate neuron 1615 + net.setSingleNeuronStimulus(1615, st_learn); #else #if defined MEMORY_CONSOLIDATION_P1 @@ -848,7 +876,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) #endif } - if (j == n) // very last timestep { #if !defined MEMORY_CONSOLIDATION_P1 && SAVE_NET_STATE == ON net.saveNetworkState("saved_state.txt", j); @@ -880,7 +908,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) logf << "Tags will have vanished at t = " << dtos(net.getTagVanishTime(), 3) << " s" << endl; #if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD - logf << "Protein synthesis (C) will have ended at t = " << dtos(ps_end[1], 3) << " s" << endl; + logf << "Protein synthesis (C) will have ended at t = " << dtos(ps_end[1], 3) << " s" << endl; #endif #if PROTEIN_POOLS == POOLS_PD || PROTEIN_POOLS == POOLS_PCD logf << "Protein synthesis (P) will have ended at t = " << dtos(ps_end[0], 3) << " s" << endl @@ -972,14 +1000,14 @@ int simulate(string working_dir, bool first_sim, string _purpose) << net.getVoltageThreshold(exc_neuron_output[nn]) << "\t\t" // exc. neuron threshold #endif #if PROTEIN_POOLS == POOLS_C - << net.getCProteinAmount(exc_neuron_output[nn]) << "\t\t"; // exc. neuron unspecific protein amount + << net.getCProteinAmount(exc_neuron_output[nn]) << "\t\t"; // exc. neuron unspecific (common) protein amount #elif PROTEIN_POOLS == POOLS_PD << net.getPProteinAmount(exc_neuron_output[nn]) << "\t\t" // exc.
neuron LTP protein amount << net.getDProteinAmount(exc_neuron_output[nn]) << "\t\t"; // exc. neuron LTD protein amount #elif PROTEIN_POOLS == POOLS_PCD << net.getPProteinAmount(exc_neuron_output[nn]) << "\t\t" // exc. neuron LTP protein amount - << net.getCProteinAmount(exc_neuron_output[nn]) << "\t\t" // exc. neuron unspecific protein amount + << net.getCProteinAmount(exc_neuron_output[nn]) << "\t\t" // exc. neuron unspecific (common) protein amount << net.getDProteinAmount(exc_neuron_output[nn]) << "\t\t"; // exc. neuron LTD protein amount #endif } @@ -1002,6 +1030,10 @@ int simulate(string working_dir, bool first_sim, string _purpose) if (sn < synapse_output.size()-1) txt_data << "\t\t"; } +#if PROTEIN_POOLS == POOLS_C || PROTEIN_POOLS == POOLS_PCD + txt_data << fixed + << "\t\t" << net.getThreshold(THRP_C, THRW_PRO); +#endif txt_data << "\r\n"; #ifndef TWO_NEURONS_ONE_SYNAPSE @@ -1049,7 +1081,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) { for (int n=0; n < pow2(Nl_exc); n++) { - *txt_net_t << fixed << net.getEarlySynapticStrength(synapse(m,n)); + *txt_net_t << fixed << net.getEarlySynapticStrength(Synapse(m,n)); if (n < pow2(Nl_exc) - 1) *txt_net_t << "\t\t"; @@ -1063,7 +1095,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) { for (int n=0; n < pow2(Nl_exc); n++) { - *txt_net_t << fixed << net.getLateSynapticStrength(synapse(m,n)); + *txt_net_t << fixed << net.getLateSynapticStrength(Synapse(m,n)); if (n < pow2(Nl_exc) - 1) *txt_net_t << "\t\t"; @@ -1146,12 +1178,10 @@ int simulate(string working_dir, bool first_sim, string _purpose) logf << "Mean firing rate (exc. population): " << dtos(mfr, 6, true) << " +- " << dtos(sdfr, 6, true) << endl << "Mean firing rate (inh. 
population): " << dtos(mfr_inh, 6, true) << " +- " << dtos(sdfr_inh, 6, true) << endl; - #ifdef SEEK_I_0 *seekic = mfr; // return mean firing rate in "SEEK_I_0" mode #endif - // ============================================================================================================================== // Create mean firing rate map plots @@ -1225,16 +1255,16 @@ int simulate(string working_dir, bool first_sim, string _purpose) createExcNeuronPlot(exc_neuron_output, gplscript, t_max); #if PROTEIN_POOLS == POOLS_C // spare one columns for exc. neurons because of protein synthesis createInhNeuronPlot(inh_neuron_output, gplscript, t_max, 3*exc_neuron_output.size()); - createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(1,2), net.getThreshold(1,3), net.getThreshold(1,1), net.getThreshold(2,1), - 3*exc_neuron_output.size()+2*inh_neuron_output.size()); + createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), + 3*exc_neuron_output.size()+2*inh_neuron_output.size()); #elif PROTEIN_POOLS == POOLS_PD // spare two columns for exc. neurons because of protein synthesis createInhNeuronPlot(inh_neuron_output, gplscript, t_max, 4*exc_neuron_output.size()); - createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(1,2), net.getThreshold(1,3), net.getThreshold(1,1), net.getThreshold(2,1), - 4*exc_neuron_output.size()+2*inh_neuron_output.size()); + createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_P, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), + 4*exc_neuron_output.size()+2*inh_neuron_output.size()); #elif PROTEIN_POOLS == POOLS_PCD // spare three columns for exc. 
neurons because of protein synthesis createInhNeuronPlot(inh_neuron_output, gplscript, t_max, 5*exc_neuron_output.size()); - createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(1,2), net.getThreshold(1,3), net.getThreshold(1,1), net.getThreshold(2,1), - 5*exc_neuron_output.size()+2*inh_neuron_output.size()); + createSynapsePlot(synapse_output, gplscript, t_max, prot_learn, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_P, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), + 5*exc_neuron_output.size()+2*inh_neuron_output.size()); #endif // ============================================================================================================================== @@ -1244,9 +1274,27 @@ int simulate(string working_dir, bool first_sim, string _purpose) createMeanWeightPlotCA(gplscript, t_max, h_0); // in the cell assembly createMeanWeightPlotControl(gplscript, t_max, h_0); // in the control subpopulation -#elif defined TWO_NEURONS_ONE_SYNAPSE_MIN - #if PROTEIN_POOLS == POOLS_C - plot2N1SMINSimResults(h_0, net.getThreshold(1,2), net.getThreshold(1,3), net.getThreshold(1,1), net.getThreshold(2,1)); +#endif + + // "plotMinSimResults": overview over important observables for one synapse and one neuron (in the network) +#if PROTEIN_POOLS == POOLS_C + #if defined MAX_ACTIVITY_NEURON + plotMinSimResults(1, 16, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_0.svg"); + plotMinSimResults(7, 16, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_335.svg"); + plotMinSimResults(14, -1, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_1940.svg"); + #elif defined ONESPIKE_EXC + 
plotMinSimResults(1, 16, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_6.svg"); + plotMinSimResults(7, 16, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_68.svg"); + plotMinSimResults(14, -1, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_1760.svg"); + #elif defined ONESPIKE_INH + plotMinSimResults(4, -1, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_17.svg"); + plotMinSimResults(10, -1, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_1615.svg"); + plotMinSimResults(12, -1, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_1690.svg"); + #elif defined TWO_NEURONS_ONE_SYNAPSE + plotMinSimResults(1, 7, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_0.svg"); + plotMinSimResults(4, 7, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_1.svg"); + #else + plotMinSimResults(1, 9, h_0, net.getThreshold(THRP_P, THRW_TAG), net.getThreshold(THRP_C, THRW_PRO), net.getThreshold(THRP_P, THRW_CA), net.getThreshold(THRP_D, THRW_CA), "traces_6.svg"); #endif #endif // ============================================================================================================================== @@ -1301,28 +1349,31 @@ void setSeekICVar(double 
*_seekic) * Sets the simulation parameters on given values and resets network(s) */ void setParams(double _I_0, double _sigma_WN, double _tau_syn, double _w_ee, double _w_ei, double _w_ie, double _w_ii, double _oscill_inp_mean, double _oscill_inp_amp, - string _prot_learn, string _prot_recall, int _output_period, int _N_stim, double _theta_p, double _theta_d, - double _Ca_pre, double _Ca_post, double _theta_pro_P, double _theta_pro_C, double _theta_pro_D, double _recall_fraction) + string _prot_learn, string _prot_recall, int _N_stim, double _recall_fraction, + double _theta_p, double _theta_d, double _Ca_pre, double _Ca_post, + double _theta_pro_P, double _theta_pro_C, double _theta_pro_D, + int _nm_paradigm_index, double _nm_amp, double _nm_begin, double _nm_end, int _output_period) { -#if OSCILL_INP != OFF - oscill_inp_mean = _oscill_inp_mean; - oscill_inp_amp = _oscill_inp_amp; -#endif - tau_syn = _tau_syn; - prot_learn = _prot_learn; - prot_recall = _prot_recall; - recall_fraction = _recall_fraction; - output_period = _output_period; - N_stim = _N_stim; net.setConstCurrent(_I_0); net.setSigma(_sigma_WN); + tau_syn = _tau_syn; net.setSynTimeConstant(_tau_syn); w_ei = _w_ei; w_ie = _w_ie; w_ii = _w_ii; net.setCouplingStrengths(_w_ee, _w_ei, _w_ie, _w_ii); +#if OSCILL_INP != OFF + oscill_inp_mean = _oscill_inp_mean; + oscill_inp_amp = _oscill_inp_amp; +#endif + prot_learn = _prot_learn; + prot_recall = _prot_recall; + N_stim = _N_stim; + recall_fraction = _recall_fraction; net.setCaConstants(_theta_p, _theta_d, _Ca_pre, _Ca_post); net.setPSThresholds(_theta_pro_P, _theta_pro_C, _theta_pro_D); + net.setNeuromodulationParameters(_nm_paradigm_index, _nm_amp, _nm_begin, _nm_end); + output_period = _output_period; net.reset(); } diff --git a/simulation-code/Neuron.cpp b/simulation-code/Neuron.cpp index 5552191..55d4bd0 100644 --- a/simulation-code/Neuron.cpp +++ b/simulation-code/Neuron.cpp @@ -83,7 +83,7 @@ vector outgoing; // vector of all outgoing connections to 
neurons in a netw Stimulus cst; // current stimulus for this neuron minstd_rand0 rg; // default uniform generator for random numbers (seed is chosen in constructor) normal_distribution norm_dist; // normal distribution to obtain Gaussian white noise, constructed in Neuron class constructor -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION bool poisson_neuron; // indicates if the neuron solely serves as Poisson spike generator #endif @@ -633,7 +633,7 @@ void processTimeStep(int tb_step, int tb_init) I_int = (I_int_exc * (V_exc_syn_rev - V) + I_int_inh * (V_inh_syn_rev - V)) / 1000.; // divide by 1000 to get from nS*mV=pA to nA #endif -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION if (poisson_neuron == false) { #endif @@ -700,12 +700,9 @@ void processTimeStep(int tb_step, int tb_init) + I_dendr #endif )) * (1. - exp(-delta_t/tau_mem)); // compute mem. pot. in mV (analytical solution) - - - #endif -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION } // poisson_neuron == false else @@ -716,23 +713,19 @@ void processTimeStep(int tb_step, int tb_init) #endif if (cst.isSet() && tb_init < 0 && abs(I_stim = cst.get(tb_step)) > EPSILON) // stimulation; get stimulus current in nA { -#if STIM_TYPE == POISSON_STIMULATION - V += R_mem * I_stim; -#else - V += R_mem * I_stim * (1. - exp(-delta_t/tau_mem)); -#endif - -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION || defined TWO_NEURONS_ONE_SYNAPSE_MIN #if NEURON_MODEL == MAT2 - V = ad_th; // definite spiking (the magnitude of I_stim is not important as long as it is finite) + V = ad_th + EPSILON; // definite spiking (the magnitude of I_stim is not important as long as it is greater than zero) #elif NEURON_MODEL == LIF - V = V_th; + V = V_th + EPSILON; #endif +#else + V += R_mem * I_stim * (1. 
- exp(-delta_t/tau_mem)); #endif } if (refractory > EPSILON // if in refractory period -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION && poisson_neuron == false #endif ) @@ -881,7 +874,7 @@ int getType() const return type; } -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION /*** setPoisson *** * Allows for making this neuron a Poisson neuron (used to generate Poissonian spikes only, without own voltage dynamics) * * - bool _poisson_neuron: indicates if the neuron shall be Poissonian */ @@ -991,7 +984,7 @@ Neuron(const double _dt) : I_0 = 0.; tau_OU = 0.005; // estimated -#ifdef TWO_NEURONS_ONE_SYNAPSE +#if STIM_TYPE == POISSON_STIMULATION poisson_neuron = false; #endif diff --git a/simulation-code/Plots.cpp b/simulation-code/Plots.cpp index e2e0d39..ecea79d 100644 --- a/simulation-code/Plots.cpp +++ b/simulation-code/Plots.cpp @@ -619,7 +619,7 @@ void createMeanWeightPlotControl(ofstream &gpl, double t_max, double h_0) * - theta_p: potentiation threshold for Ca dynamics * * - theta_d: depression threshold for Ca dynamics * * - plh_cols: number of columns to skip for they contain neuron data */ -void createSynapsePlot(vector synapse_output, ofstream &gpl, double t_max, string stimulus, double h_0, double theta_tag, +void createSynapsePlot(vector synapse_output, ofstream &gpl, double t_max, string stimulus, double h_0, double theta_tag, double theta_pro, double theta_p, double theta_d, int plh_cols) { string x_max; // maximum x-value as string @@ -838,6 +838,19 @@ void createInhNeuronPlot(vector neuron_output, ofstream &gpl, double t_max, } } +/*** pyPlot *** + * Runs a python function for plotting and adds the command to a script * + * - cmd: the command to run the python function */ + +void pyPlot(string cmd) +{ + ofstream f ("pyplot", ofstream::out | ofstream::app); + f << cmd << endl; // save command in script file + f.close(); + + system(cmd.c_str()); // execute command +} + /*** createNetworkPlotAveragedWeights *** * 
Creates a plot that displays firing rates and weights averaged over synapses; * @@ -865,11 +878,7 @@ void createNetworkPlotAveragedWeights(double t, double h_0, int Nl, double z_max string("True)\""); // already_averaged #endif - ofstream f ("pyplot", ofstream::out | ofstream::app); - f << py_cmd << endl; // save command in script file - f.close(); - - system(py_cmd.c_str()); // execute command + pyPlot(py_cmd); } /*** createNetworkPlotWeights *** @@ -889,33 +898,31 @@ void createNetworkPlotWeights(double t, double h_0, int Nl, double z_max) string("-0.5, ") + // z_min dtos(z_max,1) + string(", ") + // z_max dtos(Nl,0) + string(")\""); // Nl - - ofstream f ("pyplot", ofstream::out | ofstream::app); - f << py_cmd << endl; // save command in script file - f.close(); - - system(py_cmd.c_str()); // execute command + pyPlot(py_cmd); } #endif - -/*** plot2N1SMINSimResults *** - * Creates a plot that displays the crucial neuron and synapses observables * +/*** plotMinSimResults *** + * Creates a plot that displays crucial observables for one neuron and one synapse * + * - col_neur: the first column containing data of the targeted neuron (the membrane potential, the next two columns contain membrane current and, if applicable, protein amount) * + * - col_syn: the first column containing data of the targeted synapse (the early-phase weight, the next two columns contain late-phase weight and calcium amount) * * - h_0: initial weight * * - theta_tag: tagging threshold * * - theta_pro: protein synthesis threshold * * - theta_p: potentiation threshold for Ca dynamics * * - theta_d: depression threshold for Ca dynamics * + * - store_path: path to the resulting graphics file * */ -#ifdef TWO_NEURONS_ONE_SYNAPSE_MIN -void plot2N1SMINSimResults(double h_0, double theta_tag, double theta_pro, double theta_p, double theta_d) +void plotMinSimResults(int col_neur, int col_syn, double h_0, double theta_tag, double theta_pro, double theta_p, double theta_d, string store_path) { string 
py_cmd = string("python3 -c \"import plotFunctions as pf; pf.plotMinOverview(\'") - + dateStr("_data.txt\'") + string(", ") - + dtos(h_0, OUTPUT_PRECISION) + (",") + dtos(theta_tag, OUTPUT_PRECISION) + string(",") + + dateStr("_data.txt\'") + (",") + + to_string(col_neur) + (",") + to_string(col_syn) + (",") + + dtos(h_0, OUTPUT_PRECISION) + (",") + dtos(theta_tag, OUTPUT_PRECISION) + (",") + dtos(theta_pro, OUTPUT_PRECISION) + (",") - + dtos(theta_p, OUTPUT_PRECISION) + (",") + dtos(theta_d, OUTPUT_PRECISION) + string(")\""); + + dtos(theta_p, OUTPUT_PRECISION) + (",") + dtos(theta_d, OUTPUT_PRECISION) + (",") + + ("'") + store_path + ("')\""); - system(py_cmd.c_str()); // execute command + pyPlot(py_cmd); } -#endif + diff --git a/simulation-code/SpecialCases.hpp b/simulation-code/SpecialCases.hpp index 51f752a..f8597e2 100644 --- a/simulation-code/SpecialCases.hpp +++ b/simulation-code/SpecialCases.hpp @@ -5,6 +5,103 @@ /*** Copyright 2017-2022 Jannik Luboeinski *** *** licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) ***/ +// Special case of a network stimulated with a single pulse to evoke one spike in neuron 6 +//#define ONESPIKE_EXC +#ifdef ONESPIKE_EXC + #define ONESPIKE + #warning "Special case: ONESPIKE_EXC" +#endif + +// Special case of a network stimulated with a single pulse to evoke one spike in neuron 1615 +//#define ONESPIKE_INH +#ifdef ONESPIKE_INH + #define ONESPIKE + #warning "Special case: ONESPIKE_INH" +#endif + +// Special case of a network stimulated with a single pulse to evoke one spike in one neuron (similar configuration as MEMORY_CONSOLIDATION_P1) +#ifdef ONESPIKE + #undef STIM_TYPE + #define STIM_TYPE DET_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP + #undef PLASTICITY + #define PLASTICITY CALCIUM_AND_STC + #undef RAND_INIT_WEIGHTS + #define RAND_INIT_WEIGHTS OFF + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C + #undef STIPULATE_CA + 
#define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE FIRST + #undef CORE_SIZE + #define CORE_SIZE 1 + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD ON + #undef OSCILL_INP + #define OSCILL_INP OFF + #warning "Special case: ONESPIKE" +#endif + +// Special case of a network that has one neuron spiking at maximal activity (similar configuration as MEMORY_CONSOLIDATION_P1; also shares some features with ONESPIKE_EXC) +//#define MAX_ACTIVITY_NEURON +#ifdef MAX_ACTIVITY_NEURON + #undef STIM_TYPE + #define STIM_TYPE DET_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP + #undef PLASTICITY + #define PLASTICITY CALCIUM_AND_STC + #undef RAND_INIT_WEIGHTS + #define RAND_INIT_WEIGHTS OFF + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C + #undef STIPULATE_CA + #define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE FIRST + #undef CORE_SIZE + #define CORE_SIZE 1 + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD ON + #undef OSCILL_INP + #define OSCILL_INP OFF + #warning "Special case: MAX_ACTIVITY_NEURON" +#endif + // Special case to measure the plasticity between two neurons as a function of their firing rates (uses prot_learn and prot_recall to stimulate) // 
as in Luboeinski, 2021, doctoral thesis //#define PLASTICITY_OVER_FREQ @@ -59,6 +156,7 @@ #endif + // Special case of basic induction protocols for synaptic plasticity, with the same model as Li et al., 2016, https://doi.org/10.1371/journal.pone.0161679 // --> neuron 0 is stimulated via neuron 1, which is depolarized following the stimulus protocol // --> changes some global variables in int main() @@ -145,6 +243,49 @@ #warning "Special case: TWO_NEURONS_ONE_SYNAPSE_MIN" #endif +// Special case of a neuron that is first stimulated with a constant current such that its membrane converges to a depolarized state, and then +// relaxes back to the reversal potential +// --> neuron 0 is stimulated via neuron 1, which is depolarized following the stimulus protocol +// --> changes some global variables in int main() +//#define TWO_NEURONS_ONE_SYNAPSE_CONV +#ifdef TWO_NEURONS_ONE_SYNAPSE_CONV + #define TWO_NEURONS_ONE_SYNAPSE // sets general flag of single synapse simulations + #undef STIM_TYPE + #define STIM_TYPE DET_STIMULATION // uses deterministic stimulation + #undef NEURON_MODEL + #define NEURON_MODEL LIF // uses Leaky Integrate-and-Fire model + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP // uses monoexponential synapses + #undef PLASTICITY + #define PLASTICITY CALCIUM_AND_STC // switches on plasticity + #undef RAND_INIT_WEIGHTS + #define RAND_INIT_WEIGHTS OFF + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C // uses protein pool setting C + #undef STIPULATE_CA + #define STIPULATE_CA OFF // switches off stipulation of a cell assembly + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef
OSCILL_INP + #define OSCILL_INP OFF + #undef SAVE_NET_STATE + #define SAVE_NET_STATE OFF // switches off saving the network state + #warning "Special case: TWO_NEURONS_ONE_SYNAPSE_CONV" +#endif // Special case for seeking mean input current //#define SEEK_I_0 0.5 // if defined, I_const will be varied and the I_const value that leads to the defined mean frequency value (e.g., 0.5 Hz) in the absence of plasticity and stimulation will be determined @@ -154,13 +295,12 @@ #warning "Special case: SEEK_I_0" #endif -// Simulations of neuromodulation and assembly outgrowth in TODO Lehr, Luboeinski, Tetzlaff, 2021 -//#define MEMORY_OUTGROWTH_P3 -#ifdef MEMORY_OUTGROWTH_P3 +// Simulations of consolidation of (topologically) spatial and temporal patterns with neuromodulator-dependent STC in Lehr, Luboeinski, Tetzlaff, 2022 +//#define MEMORY_CONSOLIDATION_NM_STC_P3 +#ifdef MEMORY_CONSOLIDATION_NM_STC_P3 #define MEMORY_CONSOLIDATION_P1 - // in script, rename connections.txt to connections_P1.txt or connections.txt.bak #define CORE_SIZE_CMD 150 - #warning "Special case: MEMORY_OUTGROWTH_P3" + #warning "Special case: MEMORY_CONSOLIDATION_NM_STC_P3" #endif // Simulations to learn and consolidate organizational paradigms of attractor memories as in Luboeinski, 2021, doctoral thesis @@ -460,7 +600,7 @@ // Special case for simulations of memory consolidation and recall with intermediate recall in Luboeinski and Tetzlaff, 2021, https://doi.org/10.1038/s42003-021-01778-y // (using the learning stimulation to apply an intermediate recall stimulus) -//#define INTERMEDIATE_RECALL_LT2021 +//#define INTERMEDIATE_RECALL_P1 #ifdef INTERMEDIATE_RECALL_P1 #define MEMORY_CONSOLIDATION_P1 #define CORE_SIZE_CMD 150 diff --git a/simulation-code/Stimulus.cpp b/simulation-code/Stimulus.cpp index de05a34..0864b7a 100644 --- a/simulation-code/Stimulus.cpp +++ b/simulation-code/Stimulus.cpp @@ -271,6 +271,40 @@ class Stimulus } + + /*** isShapeDefined *** + * Tests whether or not a deterministic shape of 
the Interval object is defined *
+	 * - return: true if a shape is defined, false if not */
+	inline bool isShapeDefined() const
+	{
+		if (n > 0)
+			return true;
+		else
+			return false;
+	}
+
+	/*** readStimulusShape ***
+	 * Returns the value of the stimulus shape (the stimulus period) at a certain time bin *
+	 * - i: time bin in stimulus period
+	 * - return: stimulus shape at given time */
+	double readStimulusShape(int i) const
+	{
+		if (isShapeDefined())
+			return shape[i % n];
+		else
+			return 0.;
+	}
+
+	/*** freeShape ***
+	 * Frees the memory reserved for the shape array, in case there is reserved memory */
+	void freeShape()
+	{
+		if (isShapeDefined())
+		{
+			delete[] shape;
+			n = 0;
+		}
+	}

 	/*** get ***
 	 * Returns the current magnitude of the stimulation in the interval that may consist of a deterministic *
@@ -281,10 +315,7 @@ class Stimulus
 	{
 		double ret;

-		if (n > 0)
-			ret = shape[(t_step-start) % n]; // deterministic contribution
-		else
-			ret = 0.;
+		ret = readStimulusShape(t_step-start); // deterministic contribution

 #if STIM_TYPE == POISSON_STIMULATION
@@ -307,41 +338,6 @@ class Stimulus
 		return ret;
 	}

-	/*** readStimulusShape ***
-	 * Returns the value of the stimulus shape (the stimulus period) at a certain time bin *
-	 * - i: time bin in stimulus period
-	 * - return: stimulus shape at given time */
-	double readStimulusShape(int i) const
-	{
-		if (n > 0)
-			return shape[i % n];
-		else
-			return 0.;
-	}
-
-
-	/*** isShapeDefined ***
-	 * Tests whether or not a deterministic shape of the Interval object is defined *
-	 * - return: false if a shape is defined, true if not */
-	bool isShapeDefined() const
-	{
-		if (n > 0)
-			return false;
-		else
-			return true;
-	}
-
-	/*** freeShape ***
-	 * Frees the memory reserved for the shape array, in case there is reserved memory */
-	void freeShape()
-	{
-		if (n > 0)
-		{
-			delete[] shape;
-			n = 0;
-		}
-	}
-
 	/*** copy ***
 	 * Copies attributes from another Interval object into this (used by copy constructor and assignment
operator) *
 	 * - org: reference to an Interval object */
@@ -699,24 +695,35 @@
 		if (!f.is_open())
 			throw runtime_error(string("File ") + string(txt) + string(" could not be opened."));

-		cout << "stimulation_start = " << stimulation_start << ", stimulation_end = " << stimulation_end << endl;
-		cout << "dt = " << dt << endl;
-
-
+		cout << "Plotting Stimulus object labeled '" << name << "'" << endl;
+		cout << "\tstimulation_start = " << stimulation_start*dt << " s, stimulation_end = " << stimulation_end*dt << " s" << endl;
+		cout << "\tdt = " << dt << " s" << endl;

 		// write data file
+		f << fixed << (stimulation_start-100)*dt << "\t\t\t" // some timesteps before stimulation
+		  << 0. << "\t\t\t" << false << "\t\t\t" << NAN << "\t\t\t" << NAN << "\t\t\t"
+#if STIM_PREPROC == ON
+		  << 0
+#endif
+		  << endl
+		  << fixed << (stimulation_start-1)*dt << "\t\t\t"
+		  << 0. << "\t\t\t" << false << "\t\t\t" << NAN << "\t\t\t" << NAN << "\t\t\t"
+#if STIM_PREPROC == ON
+		  << 0
+#endif
+		  << endl;
 		for (int t_step=stimulation_start; t_step<stimulation_end; t_step++)
 		{
-			double shape = std::numeric_limits<double>::quiet_NaN();
+			double shape = 0.;
 			bool poisson = false;
-			double s_mean = std::numeric_limits<double>::quiet_NaN();
-			double s_sigma = std::numeric_limits<double>::quiet_NaN();
+			double s_mean = NAN;
+			double s_sigma = NAN;
 			for (int i=0; i<intervals.size(); i++)
 			{
 				if (t_step >= intervals[i].start && t_step < intervals[i].end)
 				{
-					if (!intervals[i].isShapeDefined())
+					if (intervals[i].isShapeDefined())
 						shape = intervals[i].readStimulusShape(t_step-intervals[i].start);
#if STIM_TYPE == POISSON_STIMULATION
 					poisson = (intervals[i].getPoissonNeuronNumber() > 0);
@@ -730,13 +737,25 @@
 				}
 			}

-			f << fixed << double(t_step)*dt << "\t\t\t"
+			f << fixed << t_step*dt << "\t\t\t"
 			  << shape << "\t\t\t" << poisson << "\t\t\t" << s_mean << "\t\t\t" << s_sigma << "\t\t\t"
#if STIM_PREPROC == ON
 			  << stim_flags[t_step-stimulation_start]
#endif
 			  << endl;
 		}
+		f << fixed << (stimulation_end+1)*dt << "\t\t\t" // some timesteps after stimulation
+		  << 0.
<< "\t\t\t" << false << "\t\t\t" << NAN << "\t\t\t" << NAN << "\t\t\t"
+#if STIM_PREPROC == ON
+		  << 0
+#endif
+		  << endl
+		  << fixed << (stimulation_end+100)*dt << "\t\t\t"
+		  << 0. << "\t\t\t" << false << "\t\t\t" << NAN << "\t\t\t" << NAN << "\t\t\t"
+#if STIM_PREPROC == ON
+		  << 0
+#endif
+		  << endl;

 		f.close();

 		// open script file
@@ -745,10 +764,10 @@ class Stimulus
 			throw runtime_error(string("File ") + string(gpl) + string(" could not be opened."));

 		// write script file
-		f << "set term pdf enhanced font \"FrontPage, 20\" color solid lw 2.5" << endl;
+		f << "set term pdf enhanced font \"Sans, 20\" color solid lw 2.5" << endl;
 		f << "set output '" << pdf << "'" << endl;
-		f << "set xlabel \"t / s\"" << endl;
-		f << "set ylabel \"a.u.\"" << endl;
+		f << "set xlabel \"Time (s)\"" << endl;
+		f << "set ylabel \"Stimulation (a.u.)\"" << endl;
 		f << "set y2range [0:2]" << endl;
 		f << "set key horizontal outside top left" << endl;
 		f << "unset y2tics" << endl;
diff --git a/simulation-code/StimulusProtocols.cpp b/simulation-code/StimulusProtocols.cpp
index 4976a62..e02598c 100644
--- a/simulation-code/StimulusProtocols.cpp
+++ b/simulation-code/StimulusProtocols.cpp
@@ -84,8 +84,10 @@ Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, doub
 		st.setPoissonStimulation(stim_strength, N_stim, frequency, index); // Poisson-distributed stimulation must be used
 	}
#endif
-	else if (strstr(pt, "F") == pt && strstr(pt, "D") > pt + 1) // "generic" protocol
+	else if (strstr(pt, "F") == pt && strstr(pt, "D") > pt + 1)
 	{
+		// "generic" protocol
+		// example: "F100D1at10.5" -> stimulation at 100 Hz for 0.1 s (note the unusual unit: the duration following "D" is given in tenths of a second), beginning at t = 10.5 s
 		double duration, at;
 		char* pt2 = strstr(pt, "D");
 		char* pt3 = strstr(pt, "at");
@@ -126,26 +128,27 @@
 	}
 	else if (!prot_learn.compare("MIN2N1S"))
 	{
-		frequency = 1.; // Hz
-		//index = st.addStimulationInterval(int(round(2.0/dt)),
int(round(2.5/dt))); // add start and end time of stimulation
-		index = st.addStimulationInterval(int(round(0.01/dt)), int(round(0.015/dt))); // add start and end time of stimulation
+		frequency = 1.; // Hz; ensures that only one pulse (of duration dt, see stimFunc()) is conveyed
+		index = st.addStimulationInterval(int(round(0.01/dt)), int(round(0.015/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
-		index = st.addStimulationInterval(int(round(0.02/dt)), int(round(0.025/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.02/dt)), int(round(0.025/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
-		index = st.addStimulationInterval(int(round(0.03/dt)), int(round(0.035/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.03/dt)), int(round(0.035/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
-		index = st.addStimulationInterval(int(round(0.04/dt)), int(round(0.045/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.04/dt)), int(round(0.045/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
-		index = st.addStimulationInterval(int(round(0.05/dt)), int(round(0.055/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.05/dt)), int(round(0.055/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the
interval
-		index = st.addStimulationInterval(int(round(0.06/dt)), int(round(0.065/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.06/dt)), int(round(0.065/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
-		index = st.addStimulationInterval(int(round(0.1/dt)), int(round(0.105/dt))); // add start and end time of stimulation
+		index = st.addStimulationInterval(int(round(0.1/dt)), int(round(0.105/dt))); // add time of stimulation (end time is not important here)
 		stimFunc(&st, frequency, stim_strength, N_stim, tau_syn, index); // actually add stimulation to the interval
 	}
 	else if (!prot_learn.compare("MINCONV"))
 	{
-
+		frequency = int(round(1/dt)); // Hz; ensures that stimulation occurs in every timestep (see stimFunc())
+		index = st.addStimulationInterval(int(round(0.0/dt)), int(round(0.1/dt))); // add start and end time of stimulation
+		stimFunc(&st, frequency, 0.15, N_stim, tau_syn, index); // actually add stimulation to the interval
 	}
 	else if (!prot_learn.compare("TRIPLET"))
 	{
diff --git a/simulation-code/Tools.cpp b/simulation-code/Tools.cpp
index 97471df..8df7328 100644
--- a/simulation-code/Tools.cpp
+++ b/simulation-code/Tools.cpp
@@ -118,25 +118,29 @@ string dtos(double num, int n = 0, bool dont_allow_zero = false)
 }

 /*** dateStr ***
- * Returns a string containing a time stamp of 18 char's, *
+ * Adds a string containing a time stamp of 18 characters, *
  * optionally added to the start of another string *
- * - str [optional]: a character array to which the time stamp will be added *
+ * - str [optional]: a string to which the time stamp will be added *
  * - fixdate [optional]: true if this time shall be fixed, false by default *
  * - return: the date string plus the specified string, if stated */
-string dateStr(string str = "", bool fixdate = false)
+static string datestr = "";
+string
dateStr(string str = "", bool fixdate = false)
 {
 	const int len = 19; // time stamp length: 18 characters (plus terminating \0)
-	static string datestr("");

 	if (fixdate == true) // if this date is to be fixed, update date
 	{
 		char buf[len];
-		time_t t = time(NULL);
-		struct tm *now = localtime(&t);
-		strftime(buf, len, "%y-%m-%d_%H-%M-%S", now);
-		datestr.assign(buf);
+		string new_datestr;
+		do
+		{
+			time_t t = time(NULL);
+			struct tm *now = localtime(&t);
+			strftime(buf, len, "%y-%m-%d_%H-%M-%S", now);
+			new_datestr.assign(buf);
+		} while (new_datestr == datestr);
+		datestr = new_datestr;
 	}

 	return datestr + str;
@@ -427,9 +431,9 @@ void showChDirErrMessage()
  * Gets a random generator seed from the computer clock, guarantees not to return *
  * the same seed in two subsequent calls (very important!) *
  * - return: clock counts since epoch of the clock */
-static unsigned int getClockSeed()
+static unsigned int last_seed = 0;
+unsigned int getClockSeed()
 {
-	static int last_seed;
 	int seed;

 	while ( (seed = chrono::system_clock::now().time_since_epoch().count()) == last_seed ) {}
@@ -438,6 +442,7 @@
 	return seed;
 }

+
 /*** step ***
 * Heaviside step function, returns 1 for x >= 0 and 0 else *
 * (the alternative step_alt returns 1 for x > 0 and 0 else) *
diff --git a/simulation-code/build_scripts_misc/compile b/simulation-code/build_scripts_misc/compile
new file mode 100644
index 0000000..59f606b
--- /dev/null
+++ b/simulation-code/build_scripts_misc/compile
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_misc"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default \
+    -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/build_scripts_misc/compile_2N1S_conv b/simulation-code/build_scripts_misc/compile_2N1S_conv
new file mode 100644
index 0000000..0dbd036
--- /dev/null
+++ b/simulation-code/build_scripts_misc/compile_2N1S_conv
@@ -0,0 +1,21 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_misc"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+
+g++ -std=c++11 \
+    -O1 NetworkMain.cpp \
+    -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default \
+    -D TWO_NEURONS_ONE_SYNAPSE_CONV \
+    -Wno-unused-result \
+    -lboost_serialization -static \
+    -o "${out_dir}/2N1S_conv.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/build_scripts_misc/compile_CA150 b/simulation-code/build_scripts_misc/compile_CA150
new file mode 100644
index 0000000..bd96bc2
--- /dev/null
+++ b/simulation-code/build_scripts_misc/compile_CA150
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_misc"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \
+    -D CORE_SIZE_CMD=150 -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net150.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/build_scripts_misc/compile_max_activity b/simulation-code/build_scripts_misc/compile_max_activity
new file mode 100644
index 0000000..4b5462d
--- /dev/null
+++ b/simulation-code/build_scripts_misc/compile_max_activity
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_misc"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MAX_ACTIVITY_NEURON \
+    -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net_max_activity_neuron.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/build_scripts_misc/compile_onespike b/simulation-code/build_scripts_misc/compile_onespike
new file mode 100644
index 0000000..0e54477
--- /dev/null
+++ b/simulation-code/build_scripts_misc/compile_onespike
@@ -0,0 +1,17 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_misc"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ONESPIKE_EXC \
+    -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net_onespike_exc.out"
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ONESPIKE_INH \
+    -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net_onespike_inh.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/build_scripts_paper3/compile_nm_psth_c b/simulation-code/build_scripts_paper3/compile_nm_psth_c
new file mode 100644
index 0000000..02b5ebe
--- /dev/null
+++ b/simulation-code/build_scripts_paper3/compile_nm_psth_c
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
+out_dir="../simulation-bin/run_binary_paper3"
+mkdir -p "${out_dir}"
+
+rm -f code.zip
+zip -q -D code.zip * -x *.out *.o *.txt
+g++ -std=c++11 -O1 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_NM_STC_P3 \
+    -Wno-unused-result -lboost_serialization -static -o "${out_dir}/net.out"
+rm -f code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/plotFunctions.py b/simulation-code/plotFunctions.py
index 731a513..546d2c2 100644
--- a/simulation-code/plotFunctions.py
+++ b/simulation-code/plotFunctions.py
@@ -990,44 +990,53 @@ def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl_exc, s_CA, title = "T
 # plotMinOverview
 # Plots the results of a simulation of synaptic weights changing based on calcium dynamics
 # data_file: data file containing the values of the membrane potential, weights, calcium amount, etc.
over time
+# col_neur: the first data column of the targeted neuron (contains the membrane potential; the two following columns contain the membrane current and, if applicable, the protein amount)
+# col_syn: the first data column of the targeted synapse (contains the early-phase weight; the two following columns contain the late-phase weight and the calcium amount)
 # h_0: initial weight
 # theta_tag: tagging threshold
 # theta_pro: protein synthesis threshold
 # theta_p: potentiation threshold for Ca dynamics
 # theta_d: depression threshold for Ca dynamics
-# store_path [optional]: path to store resulting graphics file
-def plotMinOverview(data_file, h_0, theta_tag, theta_pro, theta_p, theta_d, store_path = '.'):
+# store_path [optional]: path to the resulting graphics file (including the file name)
+def plotMinOverview(data_file, col_neur, col_syn, h_0, theta_tag, theta_pro, theta_p, theta_d, store_path = './traces.svg'):

 	data_stacked = np.loadtxt(data_file)

+	xlim_0 = None
+	xlim_1 = None
+	xlim_auto = True
+
 	fig, axes = plt.subplots(nrows=3, ncols=1, sharex=False, figsize=(10, 10))

 	# set axis labels for axes[0]
 	axes[0].set_xlabel("Time (ms)")
 	axes[0].set_ylabel("Synaptic weight (%)")
+	axes[0].set_xlim(xlim_0, xlim_1, auto = xlim_auto)

 	# convert x-axis values to ms
 	data_stacked[:,0] *= 1000

 	# plot data for axes[0]
-	axes[0].plot(data_stacked[:,0], data_stacked[:,7]/h_0*100, color="#800000", label='h', marker='None', zorder=10)
-	axes[0].plot(data_stacked[:,0], (data_stacked[:,8]+1)*100, color="#1f77b4", label='z', marker='None', zorder=9)
-	axes[0].axhline(y=(theta_pro/h_0+1)*100, label='Protein thresh.', linestyle='-.', color="#dddddd", zorder=5)
-	axes[0].axhline(y=(theta_tag/h_0+1)*100, label='Tag thresh.', linestyle='dashed', color="#dddddd", zorder=4)
-	# total weight: color="#ff7f0e"
+	if col_syn >= 1: # if synapse data exist
+		axes[0].plot(data_stacked[:,0], data_stacked[:,col_syn]/h_0*100, color="#800000", label='h', marker='None', zorder=10)
+		axes[0].plot(data_stacked[:,0],
(data_stacked[:,col_syn+1]+1)*100, color="#1f77b4", label='z', marker='None', zorder=9)
+		axes[0].axhline(y=(theta_pro/h_0+1)*100, label='Protein thresh.', linestyle='-.', color="#dddddd", zorder=5)
+		axes[0].axhline(y=(theta_tag/h_0+1)*100, label='Tag thresh.', linestyle='dashed', color="#dddddd", zorder=4)
+		# total weight: color="#ff7f0e"

-	# create legend for axes[0]
-	axes[0].legend() #loc=(0.75,0.65)) #"center right")
+		# create legend for axes[0]
+		axes[0].legend() #loc=(0.75,0.65)) #"center right")

 	# set axis labels for axes[1] (and twin axis ax1twin)
 	axes[1].set_xlabel("Time (ms)")
 	axes[1].set_ylabel("Membrane potential (mV)")
+	axes[1].set_xlim(xlim_0, xlim_1, auto = xlim_auto)
 	ax1twin = axes[1].twinx()
 	ax1twin.set_ylabel("Current (nA)")

 	# plot data for axes[1] (and twin axis ax1twin)
-	ax1g1 = axes[1].plot(data_stacked[:,0], data_stacked[:,1], color="#ff0000", label='Membrane pot.', marker='None', zorder=10)
-	ax1g2 = ax1twin.plot(data_stacked[:,0], data_stacked[:,2], color="#ffee00", label='Membrane curr.', marker='None', zorder=10)
+	ax1g1 = axes[1].plot(data_stacked[:,0], data_stacked[:,col_neur], color="#ff0000", label='Membrane pot.', marker='None', zorder=10)
+	ax1g2 = ax1twin.plot(data_stacked[:,0], data_stacked[:,col_neur+1], color="#ffee00", label='Membrane curr.', marker='None', zorder=10)

 	# create common legend for axes[1] and ax1twin
 	#fig.legend(loc=(0.72,0.45)) #"center right")
@@ -1035,20 +1044,21 @@
 	handles, labels = axes[1].get_legend_handles_labels()
 	handles_twin, labels_twin = ax1twin.get_legend_handles_labels()
 	axes[1].legend(handles + handles_twin, labels + labels_twin)
-	#plt.xlim(90, 200)

 	# set axis labels for axes[2]
 	axes[2].set_xlabel("Time (ms)")
 	axes[2].set_ylabel("Calcium or protein amount")
+	axes[2].set_xlim(xlim_0, xlim_1, auto = xlim_auto)

 	# plot data for axes[2]
-	axes[2].plot(data_stacked[:,0], data_stacked[:,9], color="#c8c896", label='Ca',
marker='None', zorder=8)
-	axes[2].plot(data_stacked[:,0], data_stacked[:,3], color="#008000", label='p', marker='None', zorder=7)
-	axes[2].axhline(y=theta_p, label='LTP thresh.', linestyle='dashed', color="#969664", zorder=7)
-	axes[2].axhline(y=theta_d, label='LTD thresh.', linestyle='dashed', color="#969696", zorder=6)
+	if col_syn >= 1: # if synapse data exist
+		axes[2].plot(data_stacked[:,0], data_stacked[:,col_syn+2], color="#c8c896", label='Ca', marker='None', zorder=8)
+		axes[2].plot(data_stacked[:,0], data_stacked[:,col_neur+2], color="#008000", label='p', marker='None', zorder=7) # protein amount is also only shown if synapse data exist (which means that an E->E synapse is considered)
+		axes[2].axhline(y=theta_p, label='LTP thresh.', linestyle='dashed', color="#969664", zorder=7)
+		axes[2].axhline(y=theta_d, label='LTD thresh.', linestyle='dashed', color="#969696", zorder=6)

-	# create legend for axes[2]
-	axes[2].legend() #loc=(0.75,0.65)) #"center right")
+		# create legend for axes[2]
+		axes[2].legend() #loc=(0.75,0.65)) #"center right")

 	# save figure as vector graphics
-	fig.savefig(os.path.join(store_path, 'standalone_lif_traces.svg'))
+	fig.savefig(store_path)
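The Stimulus.cpp changes in this patch route all lookups of the deterministic stimulus period through `isShapeDefined()`/`readStimulusShape()`, so the period of `n` time bins is always read cyclically and an interval without a defined shape contributes zero. A minimal Python sketch of that lookup semantics (the class and method names here are hypothetical, chosen only to mirror the C++ members):

```python
class IntervalSketch:
    """Illustrative stand-in for the Interval logic in Stimulus.cpp (hypothetical names)."""

    def __init__(self, shape=None):
        # 'shape' holds one stimulus period of n time bins; empty means no deterministic shape
        self.shape = list(shape) if shape else []

    def is_shape_defined(self):
        # mirrors the fixed isShapeDefined(): true iff a shape exists (n > 0)
        return len(self.shape) > 0

    def read_stimulus_shape(self, i):
        # value at time bin i, wrapping around the period; zero without a shape
        if self.is_shape_defined():
            return self.shape[i % len(self.shape)]
        return 0.0


iv = IntervalSketch(shape=[0.0, 0.5, 1.0])
print(iv.read_stimulus_shape(4))                 # bin 4 wraps to bin 1 of the period -> 0.5
print(IntervalSketch().read_stimulus_shape(7))   # no shape defined -> 0.0
```

Centralizing the `n > 0` test in one predicate is what makes the patch safe: the old `isShapeDefined()` had inverted logic (it returned `false` when a shape was defined), which the callers had to compensate for with a negated condition.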