diff --git a/README.md b/README.md index 08a651f..b1574b2 100644 --- a/README.md +++ b/README.md @@ -10,14 +10,15 @@ Pre-compiled binaries of the simulation code for Linux (tested with Kernel versi Furthermore, the package contains Python scripts to analyze the data produced by the simulations. These scripts are located in the directory 'analysis/'. -If you make use of the code or the binaries, please cite our paper: +If you use the code or the binaries in your research, please cite our papers: - Luboeinski, J., Tetzlaff, C. Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks. Commun. Biol. 4, 275 (2021). - https://doi.org/10.1038/s42003-021-01778-y +1. Luboeinski, J., Tetzlaff, C. Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks. Commun. Biol. 4, 275 (2021). https://doi.org/10.1038/s42003-021-01778-y + +2. Luboeinski, J., Tetzlaff, C. Organization and priming of long-term memory representations with two-phase plasticity. bioRxiv (2021). -The paper presents the model that underlies the simulation code provided here, as well as findings derived from the model. -However, the code contains additional features that have not been used in publications yet. Please feel free -to contact us on questions about the code and on further investigations that can be done with it. +The first paper presents the original model that underlies the simulation code and derives findings for the synaptic consolidation of a single memory representation. +The second paper extends the model to investigate the interaction of multiple memory representations in different paradigms. +Additionally, the code contains features that have not been used in publications yet. Please feel free to contact us with questions about the code and about further investigations that can be done with it. 
## Simulation code @@ -31,46 +32,66 @@ to contact us on questions about the code and on further investigations that can * 'Stimulus.hpp' - class describing a stimulus * 'StimulusProtocols.hpp' - class to define specific stimulus protocols * 'Definitions.hpp' - general definitions -* 'SpecialCases.hpp' - definitions for special simulations +* 'SpecialCases.hpp' - definitions for special simulations (see this to reproduce results of the papers mentioned above) * 'Tools.hpp' - collection of utility functions * 'Plots.hpp' - collection of plotting functions employing *gnuplot* * 'plotFunctions.py' - collection of plotting functions employing *Matplotlib* ### Compiling and linking -The simulation code comes with shell scripts to compile and link the simulation code for different purposes: -* 'compile' (has the same effect as the included Makefile) - compiles the code for performing network simulations to learn, consolidate and recall a memory representation -* 'compile_2N1S' - compiles the code for applying basic plasticity protocols to a single synapse -* 'compile_irs' - compiles the code for performing network simulations to learn and consolidate a memory representation, apply intermediate stimulation and recall - -### Running the simulation +In addition to the included Makefile, the simulation code comes with shell scripts that build it for different purposes. 
For each paper, the related scripts are located in a specific subdirectory of 'simulation-code/': +* 'build\_scripts\_paper1': + * 'compile_sizes' - compiles the code for performing network simulations to learn, consolidate, and recall a memory representation of a certain size + * 'compile_2N1S' - compiles the code for applying basic plasticity induction protocols to a single synapse + * 'compile_IRS' - compiles the code for performing network simulations to learn and consolidate a memory representation, apply intermediate stimulation, and recall -The simulation is run by executing the binary with or without command line options as defined in 'NetworkMain.cpp' (e.g., via one of the following shell scripts). -Please note that there are other options that have to be set in 'NetworkSimulation.cpp' before compiling and cannot be changed during -runtime. +* 'build\_scripts\_paper2': + * 'compile_organization' - compiles the code to learn and consolidate three memory representations in different organizational paradigms + * 'compile\_organization_noLTD' - compiles the code to learn and consolidate three memory representations in different organizational paradigms, without LTD + * 'compile_activation' - compiles the code to investigate the spontaneous activation in a network in the absence of plasticity + * 'compile_recall' - compiles the code to investigate the recall of different assemblies in the absence of plasticity -The directory 'simulation-bin/' contains the following sample shell scripts: -* 'run' - learn a memory representation, let it consolidate, and recall after 8 hours -* 'run2' - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours -* 'run3' - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, apply intermediate stimulation, let the memory representation consolidate, and recall 
after 8 hours -* 'run_2N1S' - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity - -The file 'connections.txt' contains the default connectivity matrix used in Luboeinski and Tetzlaff, Commun. Biol., 2020. If this file is absent, the simulation program will automatically generate a new network structure. +### Running the simulation +The simulation is run by executing the binary file with or without command line options (as defined in 'NetworkMain.cpp', e.g., via one of the following shell scripts). +Please note that there are additional preprocessor options that have to be set in 'NetworkSimulation.cpp' before compiling and cannot be changed during +runtime. + +The binaries and run scripts for the papers mentioned above are located in specific subdirectories of 'simulation-bin/': +* 'run\_scripts\_paper1': + * 'run_sizes' - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours + * 'run_IRS' - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, apply intermediate stimulation, let the memory representation consolidate, and recall after 8 hours + * 'run_full' - learn a memory representation, let it consolidate, and recall after 8 hours (no fast-forwarding, takes very long) + * 'run_2N1S' - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity + * 'connections.txt' - the default connectivity matrix used in this paper; if this file is absent, the simulation program will automatically generate a new network structure + +* 'run\_scripts\_paper2': + * 'run\_learn\_cons' - subsequently learn 3 memory representations and let them consolidate for 8 hours + * 'run\_learn\_cons\_noLTD' - subsequently learn 3 memory representations and let them consolidate for 8 hours, without LTD + * 'run\_activation' - simulate the 
activity in a previously consolidated network for 3 minutes without plasticity (requires running 'run\_learn\_cons' first) + * 'run\_priming\_and\_activation' - prime one of the assemblies in a previously consolidated network at a certain time and then simulate the activity for 3 minutes without plasticity (requires running 'run\_learn\_cons' first) + * 'run\_recall' - apply recall stimuli to the assemblies in a previously consolidated network and in a control network (requires running 'run\_learn\_cons' first) ## Analysis scripts -The following scripts serve to process and analyze the data produced by the simulation code. They were tested to run with Python 3.7.3 and NumPy 1.16.4. Note that some of the script files depend on others. +The following scripts serve to process and analyze the data produced by the simulation code. They were tested to run with Python 3.7.3, NumPy 1.20.1, SciPy 1.6.0, and pandas 1.0.3. +Note that some of the script files depend on others. +Also note that not all script files and functions are required to reproduce the results of a single one of the papers mentioned above. 
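As a minimal sketch of the kind of computation these analysis scripts perform, the following hypothetical example mirrors the connectivity measure used in 'adjacencyFunctions.py': the fraction of realized connections among all possible directed connections, excluding self-connections. The small adjacency matrix here is made up purely for illustration:

```python
import numpy as np

def connectivity(adj):
    # fraction of realized directed connections among all possible ones,
    # excluding self-connections (the diagonal)
    n = adj.shape[0]
    return np.sum(adj > 0) / (n**2 - n)

# hypothetical adjacency matrix of a 3-neuron network
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 0, 0]])

print(connectivity(adj))  # 3 realized out of 6 possible connections -> 0.5
```

In the actual scripts, the adjacency matrix is read from the '[timestamp]\_net\_[time].txt' files produced by the simulation program.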
### Files * 'adjacencyFunctions.py' - functions to analyze the connectivity and weights (used to compute mean and standard deviation of early- and late-phase weights) -* 'averageFileColumnsAdvanced.py' - averages data columns across files (used to average over multiple weight traces) +* 'analyzeWeights.py' - routine that runs functions to investigate the synaptic weight structure in a network +* 'assemblyAvalancheStatistics.py' - determines the statistics of avalanche occurrence within the assemblies in a network +* 'averageFileColumnsAdvanced.py' - averages data columns across files (used to average over multiple weight traces or probability distributions, for example) * 'averageWeights.py' - averages across multiple weight matrices * 'calculateMIa.py' - calculates the mutual information from two firing rate distributions * 'calculateQ.py' - calculates the pattern completion coefficient Q for an input-defined cell assembly from a firing rate distribution -* 'extractParamsQMI.py' - recursively extracts the Q and MI measures along with the simulation parameters from directories containing simulation data -* 'meanCorrelations.py' - computes firing rate correlations for neuron pairs from spike raster data and averages over subpopulations of the network -* 'probDistributions.py' - functions to analyze and plot weight and activity distributions -* 'readWeightData.py' - reads the firing rate data and early- and late-phase weight matrices from '[timestamp]\_net\_[time].txt' files produced by the simulation program +* 'extractParamsQMI.py' - recursively extracts the Q and MI measures along with the simulation parameters from directories containing simulation data (intended to process many datasets to produce raster plots) +* 'frequencyAnalysisSpikeRaster.py' - computes the frequency spectrum of spike raster data +* 'meanCorrelations.py' - computes firing-rate correlations for neuron pairs from spike raster data and averages over subpopulations of the network +* 
'numberOfSpikesInBins.py' - computes the distribution of spikes per time bin from cell assembly time series data +* 'overlapParadigms.py' - defines paradigms of overlapping cell assemblies +* 'utilityFunctions.py' - diverse utility functions, e.g., to read firing-rate, early- and late-phase weight data from '[timestamp]\_net\_[time].txt' files produced by the simulation program +* 'valueDistributions.py' - functions to analyze and plot weight and firing-rate distributions diff --git a/analysis/adjacencyFunctions.py b/analysis/adjacencyFunctions.py index 0e86bf9..f1bff47 100755 --- a/analysis/adjacencyFunctions.py +++ b/analysis/adjacencyFunctions.py @@ -1,6 +1,6 @@ ####################################################################################### ### Functions that serve to analyze the connectivity, early- and late-phase weights ### -### in a network that contains a cell assembly ### +### in a network that contains multiple cell assemblies ### ####################################################################################### ### Copyright 2019-2021 Jannik Luboeinski @@ -8,28 +8,19 @@ import numpy as np import sys +from utilityFunctions import cond_print -# the assembly core (the neurons that receive learning stimulation) and the recall neurons (the neurons that receive recall stimulation) -core = np.arange(350) -recall_fraction = 0.5 -core_recall = core[0:int(np.floor(recall_fraction*core.shape[0]))] -core_norecall = core[np.logical_not(np.in1d(core, core_recall))] - -Nl = 40 # number of excitatory neurons in one line (Nl^2 is the total number of excitatory neurons) h_0 = 0.420075 # nC, initial synaptic weight and normalization factor for z -all = np.arange(Nl**2) -noncore = all[np.logical_not(np.in1d(all, core))] - np.set_printoptions(precision=8, threshold=1e10, linewidth=200) epsilon = 1e-9 # loadWeightMatrix # Loads complete weight matrix from a file (only for excitatory neurons, though) # filename: name of the file to read the data from +# N_pop: the 
number of neurons in the considered population # return: the adjacency matrix, the early-phase weight matrix, the late-phase weight matrix, the firing rate vector -def loadWeightMatrix(filename): - +def loadWeightMatrix(filename, N_pop): global h global z global adj @@ -45,24 +36,25 @@ def loadWeightMatrix(filename): rawmatrix_v = rawdata[2].split('\n') rows = len(rawmatrix_v) + N_pop_row = int(round(np.sqrt(N_pop))) # one row of neurons when the population is aligned in a square - if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != Nl): + if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != N_pop_row): print('Data file error in "' + filename + '"') f.close() exit() - v = np.zeros((Nl,Nl)) - h = np.zeros((Nl**2,Nl**2)) - z = np.zeros((Nl**2,Nl**2)) + v = np.zeros((N_pop_row,N_pop_row)) + h = np.zeros((N_pop,N_pop)) + z = np.zeros((N_pop,N_pop)) - for i in range(Nl**2): - if i < Nl: + for i in range(N_pop): + if i < N_pop_row: value0 = rawmatrix_v[i].split('\t\t') value1 = rawmatrix_h[i].split('\t\t') value2 = rawmatrix_z[i].split('\t\t') - for j in range(Nl**2): - if i < Nl and j < Nl: + for j in range(N_pop): + if i < N_pop_row and j < N_pop_row: v[i][j] = float(value0[j]) h[i][j] = float(value1[j]) z[i][j] = h_0*float(value2[j]) @@ -86,26 +78,7 @@ def getConnectivity(pr = True): nconn = np.sum(adj > 0) line = adj.shape[0] ret = nconn / (line**2 - line) - if (pr): - print("Connectivity of adjacency matrix: " + str(ret)) - return ret - -# getConnectivityInCore -# Computes and prints the connectivity within the core -# pr [optional]: specifies if result is printed -# return: the connectivity as a value between 0 and 1 -def getConnectivityInCore(pr = True): - connections_within_core = 0 - N_core = len(core) - - for n in core: - connections_within_core += len(connectionsFromCore(n, False)) - - ret = connections_within_core / (N_core**2 - N_core) - - if pr: - print("Connectivity within core: " + str(ret)) - + cond_print(pr, "Connectivity of adjacency 
matrix: " + str(ret)) return ret # areConnected @@ -118,12 +91,10 @@ def areConnected(i, j, pr = True): global adj if adj[i][j] == 1: - if pr: - print("Connection " + str(i) + "->" + str(j) + " does exist!") + cond_print(pr, "Connection " + str(i) + "->" + str(j) + " does exist!") return True elif adj[i][j] == 0: - if pr: - print("Connection " + str(i) + "->" + str(j) + " does NOT exist!") + cond_print(pr, "Connection " + str(i) + "->" + str(j) + " does NOT exist!") return False # incomingConnections @@ -134,8 +105,7 @@ def areConnected(i, j, pr = True): def incomingConnections(i, pr = True): global adj inc = np.where(adj[:, i] == 1)[0] - if pr: - print("Incoming connections to " + str(i) + " (" + str(len(inc)) + "): \n" + str(inc)) + cond_print(pr, "Incoming connections to " + str(i) + " (" + str(len(inc)) + "): \n" + str(inc)) return inc @@ -147,8 +117,7 @@ def incomingConnections(i, pr = True): def outgoingConnections(i, pr = True): global adj out = np.where(adj[i, :] == 1)[0] - if pr: - print("Outgoing connections from " + str(i) + " (" + str(len(out)) + "): \n" + str(out)) + cond_print(pr, "Outgoing connections from " + str(i) + " (" + str(len(out)) + "): \n" + str(out)) return out @@ -163,8 +132,7 @@ def incomingEarlyPhaseWeights(i, pr = True): hi = h[:, i] adj_connections = (adj[:, i] == 1) inc = hi[adj_connections] - if pr: - print("Incoming early-phase weights to " + str(i) + " (" + str(len(inc)) + "): \n" + str(inc)) + cond_print(pr, "Incoming early-phase weights to " + str(i) + " (" + str(len(inc)) + "): \n" + str(inc)) return inc @@ -179,37 +147,10 @@ def outgoingEarlyPhaseWeights(i, pr = True): hi = h[i, :] adj_connections = (adj[i, :] == 1) out = hi[adj_connections] - if pr: - print("Outgoing early-phase weights from " + str(i) + " (" + str(len(out)) + "): \n" + str(out)) + cond_print(pr, "Outgoing early-phase weights from " + str(i) + " (" + str(len(out)) + "): \n" + str(out)) return out -# connectionsFromCore -# Prints and returns all the core 
neurons from which neuron i receives inputs -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of neuron numbers -def connectionsFromCore(i, pr = True): - global adj - cfc = core[np.in1d(core, incomingConnections(i, False))].flatten() - if pr: - print("Connections from core to neuron " + str(i) + " (" + str(len(cfc)) + "): \n" + str(cfc)) - - return cfc - -# connectionsFromRSCore -# Prints and returns all the recall-stimulated core neurons from which neuron i receives inputs -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of neuron numbers -def connectionsFromRSCore(i, pr = True): - global adj - cfrsc = core_recall[np.in1d(core_recall, incomingConnections(i, False))].flatten() - if pr: - print("Connections from recall-stim. core to neuron " + str(i) + " (" + str(len(cfrsc)) + "): \n" + str(cfrsc)) - - return cfrsc - # earlyPhaseWeightsFromSet # Prints and returns all the early-phase synaptic weights incoming to neuron i from a given set of neurons # i: neuron index @@ -224,46 +165,6 @@ def earlyPhaseWeightsFromSet(i, set): return inc_h -# earlyPhaseWeightsFromCore -# Prints and returns all the early-phase synaptic weights incoming to neuron i from core neurons -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of early-phase weights in units of nC -def earlyPhaseWeightsFromCore(i, pr = True): - inc_h = earlyPhaseWeightsFromSet(i, core) - if pr: - print("Incoming early-phase weights from core to neuron " + str(i) + " (" + str(len(inc_h)) + "): \n" + str(inc_h)) - - return inc_h - -# earlyPhaseWeightsFromRSCore -# Prints and returns all the early-phase synaptic weights incoming to neuron i from recall-stimulated core neurons -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of early-phase weights in units of nC -def earlyPhaseWeightsFromRSCore(i, pr = True): - inc_h = earlyPhaseWeightsFromSet(i, core_recall) - if pr: - 
print("Incoming early-phase weights from recall-stim. core to neuron " + str(i) + " (" + str(len(inc_h)) + "): \n" + str(inc_h)) - - return inc_h - -# earlyPhaseWeightsToCore -# Prints and returns all the early-phase synaptic weights outgoing to core neurons from neuron i -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of early-phase weights in units of nC -def earlyPhaseWeightsToCore(i, pr = True): - global adj - global h - hi = h[i, :] - adj_connections = np.logical_and(adj[i, :] == 1, np.in1d(np.arange(len(hi)), core)) - out_h = hi[adj_connections] - if pr: - print("Outgoing early-phase weights from neuron " + str(i) + " to core (" + str(len(out_h)) + "): \n" + str(out_h)) - - return out_h - # latePhaseWeightsFromSet # Prints and returns all the late-phase synaptic weights incoming to neuron i from a given set of neurons # i: neuron index @@ -278,34 +179,6 @@ def latePhaseWeightsFromSet(i, set): return inc_z -# latePhaseWeightsFromCore -# Prints and returns all the late-phase synaptic weights incoming to neuron i from core neurons -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of late-phase weights in units of nC -def latePhaseWeightsFromCore(i, pr = True): - inc_z = latePhaseWeightsFromSet(i, core) - if pr: - print("Incoming late-phase weights from core to neuron " + str(i) + " (" + str(len(inc_z)) + "): \n" + str(inc_z)) - - return inc_z - -# latePhaseWeightsToCore -# Prints and returns all the late-phase synaptic weights outgoing to core neurons from neuron i -# i: neuron index -# pr [optional]: specifies if result is printed -# return: array of late-phase weights in units of nC -def latePhaseWeightsToCore(i, pr = True): - global adj - global z - zi = z[i, :] - adj_connections = np.logical_and(adj[i, :] == 1, np.in1d(np.arange(len(zi)), core)) - out_z = zi[adj_connections] - if pr: - print("Outgoing late-phase weights from neuron " + str(i) + " to core (" + str(len(out_z)) + "): \n" + 
str(out_z)) - - return out_z - # meanEarlyPhaseWeight # Returns the mean early-phase synaptic weight between two sets of neurons; prints the connectivity # and the mean early-phase weight @@ -313,29 +186,30 @@ def latePhaseWeightsToCore(i, pr = True): # set2 [optional]: the second set of neurons (postsynaptic); if not specified, connections within "set" are considered # pr [optional]: specifies if result shall be printed # return: early-phase weight in units of nC -def meanEarlyPhaseWeight(set, set2 = [], pr = True): +def meanEarlyPhaseWeight(set, set2 = None, pr = True): summed_weight = 0 connection_num = 0 - self_set = False - if len(set2) <= 0: + if set2 is None: set2 = set # consider internal connections within "set" if "set2" is not specified - self_set = True for n in set2: inc_weights = earlyPhaseWeightsFromSet(n, set) summed_weight += np.sum(inc_weights) connection_num += len(inc_weights) - ret = summed_weight / connection_num + if connection_num > 0: + ret = summed_weight / connection_num - if pr: - if self_set: - print("Self-connectivity: " + str(connection_num / (len(set)**2 - len(set)))) + if set2 is set: # note: "set2 is None" would always be False here, as set2 has been reassigned above + cond_print(pr, "Self-connectivity: " + str(connection_num / (len(set)**2 - len(set)))) else: - print("Connectivity: " + str(connection_num / (len(set)*len(set2)))) + cond_print(pr, "Connectivity: " + str(connection_num / (len(set)*len(set2)))) + else: + ret = np.nan + cond_print(pr, "Connectivity: none") - print("Mean early-phase weight: " + str(ret)) + cond_print(pr, "Mean early-phase weight: " + str(ret)) return ret @@ -345,12 +219,12 @@ def meanEarlyPhaseWeight(set, set2 = [], pr = True): # set2 [optional]: the second set of neurons (postsynaptic); if not specified, connections within "set" are considered # pr [optional]: specifies if result shall be printed # return: standard deviation of the early-phase weight in units of nC -def sdEarlyPhaseWeight(set, set2 = [], pr = True): +def sdEarlyPhaseWeight(set, set2 = None, pr = True): mean = meanEarlyPhaseWeight(set, set2, 
False) summed_qu_dev = 0 connection_num = 0 - if len(set2) <= 0: + if set2 is None: set2 = set # consider internal connections within "set" if "set2" is not specified for n in set2: @@ -358,13 +232,16 @@ def sdEarlyPhaseWeight(set, set2 = [], pr = True): summed_qu_dev += np.sum(qu_devs) connection_num += len(qu_devs) - ret = np.sqrt(summed_qu_dev / (connection_num-1)) + if connection_num > 1: + ret = np.sqrt(summed_qu_dev / (connection_num-1)) + else: + ret = np.nan - if pr: - print("Std. dev. of early-phase weight: " + str(ret)) + cond_print(pr, "Std. dev. of early-phase weight: " + str(ret)) return ret + # meanLatePhaseWeight # Returns the mean late-phase synaptic weight between two sets of neurons; prints the connectivity # and the mean late-phase weight @@ -372,29 +249,30 @@ def sdEarlyPhaseWeight(set, set2 = [], pr = True): # set2 [optional]: the second set of neurons (postsynaptic); if not specified, connections within "set" are considered # pr [optional]: specifies if result shall be printed # return: late-phase weight in units of nC -def meanLatePhaseWeight(set, set2 = [], pr = True): +def meanLatePhaseWeight(set, set2 = None, pr = True): summed_weight = 0 connection_num = 0 - self_set = False - if len(set2) <= 0: + if set2 is None: set2 = set # consider internal connections within "set" if "set2" is not specified - self_set = True for n in set2: inc_weights = latePhaseWeightsFromSet(n, set) summed_weight += np.sum(inc_weights) connection_num += len(inc_weights) - ret = summed_weight / connection_num + if connection_num > 0: + ret = summed_weight / connection_num - if pr: - if self_set: - print("Self-connectivity: " + str(connection_num / (len(set)**2 - len(set)))) + if set2 is set: # note: "set2 is None" would always be False here, as set2 has been reassigned above + cond_print(pr, "Self-connectivity: " + str(connection_num / (len(set)**2 - len(set)))) else: - print("Connectivity: " + str(connection_num / (len(set)*len(set2)))) + cond_print(pr, 
"Connectivity: " + str(connection_num / (len(set)*len(set2)))) + else: + ret = np.nan + cond_print(pr, "Connectivity: none") - print("Mean late-phase weight: " + str(ret)) + cond_print(pr, "Mean late-phase weight: " + str(ret)) return ret @@ -417,73 +300,783 @@ def sdLatePhaseWeight(set, set2 = [], pr = True): summed_qu_dev += np.sum(qu_devs) connection_num += len(qu_devs) - ret = np.sqrt(summed_qu_dev / (connection_num-1)) + if connection_num > 1: + ret = np.sqrt(summed_qu_dev / (connection_num-1)) + else: + ret = np.nan - if pr: - print("Std. dev. of late-phase weight: " + str(ret)) + cond_print(pr, "Std. dev. of late-phase weight: " + str(ret)) return ret +# meanCoreWeights +# Computes the mean weights in cores (including intersections) and appends them, together with +# the time for readout, to a file +# ts: timestamp of the data file to read +# time_for_readout: the time of the data file to read +# coreA: array of indices of the first cell assembly (core) neurons +# coreB: array of indices of the second cell assembly (core) neurons +# coreC: array of indices of the third cell assembly (core) neurons +# N_pop: the number of neurons in the considered population +# pr [optional]: specifies if result shall be printed +def meanCoreWeights(ts, time_for_readout, coreA, coreB, coreC, N_pop, pr = True): + + cond_print(pr, "##############################################") + cond_print(pr, "At time", time_for_readout) + loadWeightMatrix(ts + "_net_" + time_for_readout + ".txt", N_pop) + f = open("cores_mean_tot_weights.txt", "a") + + f.write(time_for_readout + "\t\t") + + cond_print(pr, "--------------------------------") + cond_print(pr, "A -> A:") + hm = meanEarlyPhaseWeight(coreA, coreA, pr) + hsd = sdEarlyPhaseWeight(coreA, coreA, pr) + zm = meanLatePhaseWeight(coreA, coreA, pr) + zsd = sdLatePhaseWeight(coreA, coreA, pr) + f.write(str(hm + zm) + "\t\t") + f.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t") + cond_print(pr, "Mean total weight: " + str(hm + zm)) + + 
cond_print(pr, "--------------------------------") + cond_print(pr, "B -> B:") + hm = meanEarlyPhaseWeight(coreB, coreB, pr) + hsd = sdEarlyPhaseWeight(coreB, coreB, pr) + zm = meanLatePhaseWeight(coreB, coreB, pr) + zsd = sdLatePhaseWeight(coreB, coreB, pr) + f.write(str(hm + zm) + "\t\t") + f.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t") + cond_print(pr, "Mean total weight: " + str(hm + zm)) + + cond_print(pr, "--------------------------------") + cond_print(pr, "C -> C:") + hm = meanEarlyPhaseWeight(coreC, coreC, pr) + hsd = sdEarlyPhaseWeight(coreC, coreC, pr) + zm = meanLatePhaseWeight(coreC, coreC, pr) + zsd = sdLatePhaseWeight(coreC, coreC, pr) + f.write(str(hm + zm) + "\t\t") + f.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n") + cond_print(pr, "Mean total weight: " + str(hm + zm)) + + f.close() + +# meanWeightMatrix +# Computes the abstract mean weight matrix and writes the outcome to a file +# ts: timestamp of the data file to read +# time_for_readout: the time of the data file to read +# coreA: array of indices of the first cell assembly (core) neurons +# coreB: array of indices of the second cell assembly (core) neurons +# coreC: array of indices of the third cell assembly (core) neurons +# N_pop: the number of neurons in the considered population +# pr [optional]: specifies if result shall be printed +def meanWeightMatrix(ts, time_for_readout, coreA, coreB, coreC, N_pop, pr = True): + # define the whole considered population + all = np.arange(N_pop) + + # determine masks of whole cores + mask_coreA = np.in1d(all, coreA) # boolean mask of neurons in whole core A + mask_coreB = np.in1d(all, coreB) # boolean mask of neurons in whole core B + mask_coreC = np.in1d(all, coreC) # boolean mask of neurons in whole core C + + # determine exclusive intersections + mask_I_AB = np.logical_and( np.logical_and(mask_coreA, mask_coreB), np.logical_not(mask_coreC) ) + mask_I_AC = np.logical_and( np.logical_and(mask_coreA, mask_coreC), np.logical_not(mask_coreB) ) + 
+	mask_I_BC = np.logical_and( np.logical_and(mask_coreB, mask_coreC), np.logical_not(mask_coreA) )
+	mask_I_ABC = np.logical_and( mask_coreA, np.logical_and(mask_coreB, mask_coreC) )
+	I_AB = all[mask_I_AB]
+	I_AC = all[mask_I_AC]
+	I_BC = all[mask_I_BC]
+	I_ABC = all[mask_I_ABC]
+
+	# determine exclusive cores by removing exclusive intersections from whole cores
+	exA = all[np.logical_and(mask_coreA, \
+	          np.logical_and(np.logical_not(mask_I_AB), \
+	          np.logical_and(np.logical_not(mask_I_AC), np.logical_not(mask_I_ABC))))]
+	exB = all[np.logical_and(mask_coreB, \
+	          np.logical_and(np.logical_not(mask_I_AB), \
+	          np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC))))]
+	exC = all[np.logical_and(mask_coreC, \
+	          np.logical_and(np.logical_not(mask_I_AC), \
+	          np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC))))]
+
+	# determine control subpopulation
+	control = all[np.logical_not(np.in1d(all, np.concatenate([I_ABC, I_AB, I_AC, I_BC,
+	                                                          coreA, coreB, coreC])))] # all neurons that are not part of an assembly
+
+	cond_print(pr, "##############################################")
+	cond_print(pr, "At time", time_for_readout)
+	loadWeightMatrix(ts + "_net_" + time_for_readout + ".txt", N_pop)
+	f = open("mean_tot_weights_" + time_for_readout + ".txt", "w")
+	fsd = open("sd_tot_weights_" + time_for_readout + ".txt", "w")
+
+	### OUTGOING FROM I_ABC ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> I_ABC:")
+	hm = meanEarlyPhaseWeight(I_ABC, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, I_ABC, pr)
+	zm = meanLatePhaseWeight(I_ABC, I_ABC, pr)
+	zsd = sdLatePhaseWeight(I_ABC, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> I_AC:")
+	hm = meanEarlyPhaseWeight(I_ABC, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, I_AC, pr)
+	zm = meanLatePhaseWeight(I_ABC, I_AC, pr)
+	zsd = sdLatePhaseWeight(I_ABC, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> ~A:")
+	hm = meanEarlyPhaseWeight(I_ABC, exA, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, exA, pr)
+	zm = meanLatePhaseWeight(I_ABC, exA, pr)
+	zsd = sdLatePhaseWeight(I_ABC, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> I_AB:")
+	hm = meanEarlyPhaseWeight(I_ABC, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, I_AB, pr)
+	zm = meanLatePhaseWeight(I_ABC, I_AB, pr)
+	zsd = sdLatePhaseWeight(I_ABC, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> ~B:")
+	hm = meanEarlyPhaseWeight(I_ABC, exB, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, exB, pr)
+	zm = meanLatePhaseWeight(I_ABC, exB, pr)
+	zsd = sdLatePhaseWeight(I_ABC, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> I_BC:")
+	hm = meanEarlyPhaseWeight(I_ABC, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, I_BC, pr)
+	zm = meanLatePhaseWeight(I_ABC, I_BC, pr)
+	zsd = sdLatePhaseWeight(I_ABC, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> ~C:")
+	hm = meanEarlyPhaseWeight(I_ABC, exC, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, exC, pr)
+	zm = meanLatePhaseWeight(I_ABC, exC, pr)
+	zsd = sdLatePhaseWeight(I_ABC, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_ABC -> Control:")
+	hm = meanEarlyPhaseWeight(I_ABC, control, pr)
+	hsd = sdEarlyPhaseWeight(I_ABC, control, pr)
+	zm = meanLatePhaseWeight(I_ABC, control, pr)
+	zsd = sdLatePhaseWeight(I_ABC, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM I_AC ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> I_ABC:")
+	hm = meanEarlyPhaseWeight(I_AC, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, I_ABC, pr)
+	zm = meanLatePhaseWeight(I_AC, I_ABC, pr)
+	zsd = sdLatePhaseWeight(I_AC, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> I_AC:")
+	hm = meanEarlyPhaseWeight(I_AC, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, I_AC, pr)
+	zm = meanLatePhaseWeight(I_AC, I_AC, pr)
+	zsd = sdLatePhaseWeight(I_AC, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> ~A:")
+	hm = meanEarlyPhaseWeight(I_AC, exA, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, exA, pr)
+	zm = meanLatePhaseWeight(I_AC, exA, pr)
+	zsd = sdLatePhaseWeight(I_AC, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> I_AB:")
+	hm = meanEarlyPhaseWeight(I_AC, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, I_AB, pr)
+	zm = meanLatePhaseWeight(I_AC, I_AB, pr)
+	zsd = sdLatePhaseWeight(I_AC, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> ~B:")
+	hm = meanEarlyPhaseWeight(I_AC, exB, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, exB, pr)
+	zm = meanLatePhaseWeight(I_AC, exB, pr)
+	zsd = sdLatePhaseWeight(I_AC, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> I_BC:")
+	hm = meanEarlyPhaseWeight(I_AC, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, I_BC, pr)
+	zm = meanLatePhaseWeight(I_AC, I_BC, pr)
+	zsd = sdLatePhaseWeight(I_AC, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> ~C:")
+	hm = meanEarlyPhaseWeight(I_AC, exC, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, exC, pr)
+	zm = meanLatePhaseWeight(I_AC, exC, pr)
+	zsd = sdLatePhaseWeight(I_AC, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AC -> Control:")
+	hm = meanEarlyPhaseWeight(I_AC, control, pr)
+	hsd = sdEarlyPhaseWeight(I_AC, control, pr)
+	zm = meanLatePhaseWeight(I_AC, control, pr)
+	zsd = sdLatePhaseWeight(I_AC, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM ~A ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> I_ABC:")
+	hm = meanEarlyPhaseWeight(exA, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(exA, I_ABC, pr)
+	zm = meanLatePhaseWeight(exA, I_ABC, pr)
+	zsd = sdLatePhaseWeight(exA, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> I_AC:")
+	hm = meanEarlyPhaseWeight(exA, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(exA, I_AC, pr)
+	zm = meanLatePhaseWeight(exA, I_AC, pr)
+	zsd = sdLatePhaseWeight(exA, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> ~A:")
+	hm = meanEarlyPhaseWeight(exA, exA, pr)
+	hsd = sdEarlyPhaseWeight(exA, exA, pr)
+	zm = meanLatePhaseWeight(exA, exA, pr)
+	zsd = sdLatePhaseWeight(exA, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> I_AB:")
+	hm = meanEarlyPhaseWeight(exA, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(exA, I_AB, pr)
+	zm = meanLatePhaseWeight(exA, I_AB, pr)
+	zsd = sdLatePhaseWeight(exA, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> ~B:")
+	hm = meanEarlyPhaseWeight(exA, exB, pr)
+	hsd = sdEarlyPhaseWeight(exA, exB, pr)
+	zm = meanLatePhaseWeight(exA, exB, pr)
+	zsd = sdLatePhaseWeight(exA, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> I_BC:")
+	hm = meanEarlyPhaseWeight(exA, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(exA, I_BC, pr)
+	zm = meanLatePhaseWeight(exA, I_BC, pr)
+	zsd = sdLatePhaseWeight(exA, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> ~C:")
+	hm = meanEarlyPhaseWeight(exA, exC, pr)
+	hsd = sdEarlyPhaseWeight(exA, exC, pr)
+	zm = meanLatePhaseWeight(exA, exC, pr)
+	zsd = sdLatePhaseWeight(exA, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~A -> Control:")
+	hm = meanEarlyPhaseWeight(exA, control, pr)
+	hsd = sdEarlyPhaseWeight(exA, control, pr)
+	zm = meanLatePhaseWeight(exA, control, pr)
+	zsd = sdLatePhaseWeight(exA, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM I_AB ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> I_ABC:")
+	hm = meanEarlyPhaseWeight(I_AB, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, I_ABC, pr)
+	zm = meanLatePhaseWeight(I_AB, I_ABC, pr)
+	zsd = sdLatePhaseWeight(I_AB, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> I_AC:")
+	hm = meanEarlyPhaseWeight(I_AB, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, I_AC, pr)
+	zm = meanLatePhaseWeight(I_AB, I_AC, pr)
+	zsd = sdLatePhaseWeight(I_AB, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> ~A:")
+	hm = meanEarlyPhaseWeight(I_AB, exA, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, exA, pr)
+	zm = meanLatePhaseWeight(I_AB, exA, pr)
+	zsd = sdLatePhaseWeight(I_AB, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> I_AB:")
+	hm = meanEarlyPhaseWeight(I_AB, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, I_AB, pr)
+	zm = meanLatePhaseWeight(I_AB, I_AB, pr)
+	zsd = sdLatePhaseWeight(I_AB, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> ~B:")
+	hm = meanEarlyPhaseWeight(I_AB, exB, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, exB, pr)
+	zm = meanLatePhaseWeight(I_AB, exB, pr)
+	zsd = sdLatePhaseWeight(I_AB, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> I_BC:")
+	hm = meanEarlyPhaseWeight(I_AB, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, I_BC, pr)
+	zm = meanLatePhaseWeight(I_AB, I_BC, pr)
+	zsd = sdLatePhaseWeight(I_AB, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> ~C:")
+	hm = meanEarlyPhaseWeight(I_AB, exC, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, exC, pr)
+	zm = meanLatePhaseWeight(I_AB, exC, pr)
+	zsd = sdLatePhaseWeight(I_AB, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_AB -> Control:")
+	hm = meanEarlyPhaseWeight(I_AB, control, pr)
+	hsd = sdEarlyPhaseWeight(I_AB, control, pr)
+	zm = meanLatePhaseWeight(I_AB, control, pr)
+	zsd = sdLatePhaseWeight(I_AB, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM ~B ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> I_ABC:")
+	hm = meanEarlyPhaseWeight(exB, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(exB, I_ABC, pr)
+	zm = meanLatePhaseWeight(exB, I_ABC, pr)
+	zsd = sdLatePhaseWeight(exB, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> I_AC:")
+	hm = meanEarlyPhaseWeight(exB, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(exB, I_AC, pr)
+	zm = meanLatePhaseWeight(exB, I_AC, pr)
+	zsd = sdLatePhaseWeight(exB, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> ~A:")
+	hm = meanEarlyPhaseWeight(exB, exA, pr)
+	hsd = sdEarlyPhaseWeight(exB, exA, pr)
+	zm = meanLatePhaseWeight(exB, exA, pr)
+	zsd = sdLatePhaseWeight(exB, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> I_AB:")
+	hm = meanEarlyPhaseWeight(exB, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(exB, I_AB, pr)
+	zm = meanLatePhaseWeight(exB, I_AB, pr)
+	zsd = sdLatePhaseWeight(exB, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> ~B:")
+	hm = meanEarlyPhaseWeight(exB, exB, pr)
+	hsd = sdEarlyPhaseWeight(exB, exB, pr)
+	zm = meanLatePhaseWeight(exB, exB, pr)
+	zsd = sdLatePhaseWeight(exB, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> I_BC:")
+	hm = meanEarlyPhaseWeight(exB, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(exB, I_BC, pr)
+	zm = meanLatePhaseWeight(exB, I_BC, pr)
+	zsd = sdLatePhaseWeight(exB, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> ~C:")
+	hm = meanEarlyPhaseWeight(exB, exC, pr)
+	hsd = sdEarlyPhaseWeight(exB, exC, pr)
+	zm = meanLatePhaseWeight(exB, exC, pr)
+	zsd = sdLatePhaseWeight(exB, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~B -> Control:")
+	hm = meanEarlyPhaseWeight(exB, control, pr)
+	hsd = sdEarlyPhaseWeight(exB, control, pr)
+	zm = meanLatePhaseWeight(exB, control, pr)
+	zsd = sdLatePhaseWeight(exB, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM I_BC ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> I_ABC:")
+	hm = meanEarlyPhaseWeight(I_BC, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, I_ABC, pr)
+	zm = meanLatePhaseWeight(I_BC, I_ABC, pr)
+	zsd = sdLatePhaseWeight(I_BC, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> I_AC:")
+	hm = meanEarlyPhaseWeight(I_BC, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, I_AC, pr)
+	zm = meanLatePhaseWeight(I_BC, I_AC, pr)
+	zsd = sdLatePhaseWeight(I_BC, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> ~A:")
+	hm = meanEarlyPhaseWeight(I_BC, exA, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, exA, pr)
+	zm = meanLatePhaseWeight(I_BC, exA, pr)
+	zsd = sdLatePhaseWeight(I_BC, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> I_AB:")
+	hm = meanEarlyPhaseWeight(I_BC, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, I_AB, pr)
+	zm = meanLatePhaseWeight(I_BC, I_AB, pr)
+	zsd = sdLatePhaseWeight(I_BC, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> ~B:")
+	hm = meanEarlyPhaseWeight(I_BC, exB, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, exB, pr)
+	zm = meanLatePhaseWeight(I_BC, exB, pr)
+	zsd = sdLatePhaseWeight(I_BC, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> I_BC:")
+	hm = meanEarlyPhaseWeight(I_BC, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, I_BC, pr)
+	zm = meanLatePhaseWeight(I_BC, I_BC, pr)
+	zsd = sdLatePhaseWeight(I_BC, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> ~C:")
+	hm = meanEarlyPhaseWeight(I_BC, exC, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, exC, pr)
+	zm = meanLatePhaseWeight(I_BC, exC, pr)
+	zsd = sdLatePhaseWeight(I_BC, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "I_BC -> Control:")
+	hm = meanEarlyPhaseWeight(I_BC, control, pr)
+	hsd = sdEarlyPhaseWeight(I_BC, control, pr)
+	zm = meanLatePhaseWeight(I_BC, control, pr)
+	zsd = sdLatePhaseWeight(I_BC, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+
+	### OUTGOING FROM ~C ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> I_ABC:")
+	hm = meanEarlyPhaseWeight(exC, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(exC, I_ABC, pr)
+	zm = meanLatePhaseWeight(exC, I_ABC, pr)
+	zsd = sdLatePhaseWeight(exC, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> I_AC:")
+	hm = meanEarlyPhaseWeight(exC, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(exC, I_AC, pr)
+	zm = meanLatePhaseWeight(exC, I_AC, pr)
+	zsd = sdLatePhaseWeight(exC, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> ~A:")
+	hm = meanEarlyPhaseWeight(exC, exA, pr)
+	hsd = sdEarlyPhaseWeight(exC, exA, pr)
+	zm = meanLatePhaseWeight(exC, exA, pr)
+	zsd = sdLatePhaseWeight(exC, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> I_AB:")
+	hm = meanEarlyPhaseWeight(exC, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(exC, I_AB, pr)
+	zm = meanLatePhaseWeight(exC, I_AB, pr)
+	zsd = sdLatePhaseWeight(exC, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> ~B:")
+	hm = meanEarlyPhaseWeight(exC, exB, pr)
+	hsd = sdEarlyPhaseWeight(exC, exB, pr)
+	zm = meanLatePhaseWeight(exC, exB, pr)
+	zsd = sdLatePhaseWeight(exC, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> I_BC:")
+	hm = meanEarlyPhaseWeight(exC, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(exC, I_BC, pr)
+	zm = meanLatePhaseWeight(exC, I_BC, pr)
+	zsd = sdLatePhaseWeight(exC, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> ~C:")
+	hm = meanEarlyPhaseWeight(exC, exC, pr)
+	hsd = sdEarlyPhaseWeight(exC, exC, pr)
+	zm = meanLatePhaseWeight(exC, exC, pr)
+	zsd = sdLatePhaseWeight(exC, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "~C -> Control:")
+	hm = meanEarlyPhaseWeight(exC, control, pr)
+	hsd = sdEarlyPhaseWeight(exC, control, pr)
+	zm = meanLatePhaseWeight(exC, control, pr)
+	zsd = sdLatePhaseWeight(exC, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	### OUTGOING FROM Control ###########################
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> I_ABC:")
+	hm = meanEarlyPhaseWeight(control, I_ABC, pr)
+	hsd = sdEarlyPhaseWeight(control, I_ABC, pr)
+	zm = meanLatePhaseWeight(control, I_ABC, pr)
+	zsd = sdLatePhaseWeight(control, I_ABC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> I_AC:")
+	hm = meanEarlyPhaseWeight(control, I_AC, pr)
+	hsd = sdEarlyPhaseWeight(control, I_AC, pr)
+	zm = meanLatePhaseWeight(control, I_AC, pr)
+	zsd = sdLatePhaseWeight(control, I_AC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> ~A:")
+	hm = meanEarlyPhaseWeight(control, exA, pr)
+	hsd = sdEarlyPhaseWeight(control, exA, pr)
+	zm = meanLatePhaseWeight(control, exA, pr)
+	zsd = sdLatePhaseWeight(control, exA, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> I_AB:")
+	hm = meanEarlyPhaseWeight(control, I_AB, pr)
+	hsd = sdEarlyPhaseWeight(control, I_AB, pr)
+	zm = meanLatePhaseWeight(control, I_AB, pr)
+	zsd = sdLatePhaseWeight(control, I_AB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> ~B:")
+	hm = meanEarlyPhaseWeight(control, exB, pr)
+	hsd = sdEarlyPhaseWeight(control, exB, pr)
+	zm = meanLatePhaseWeight(control, exB, pr)
+	zsd = sdLatePhaseWeight(control, exB, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> I_BC:")
+	hm = meanEarlyPhaseWeight(control, I_BC, pr)
+	hsd = sdEarlyPhaseWeight(control, I_BC, pr)
+	zm = meanLatePhaseWeight(control, I_BC, pr)
+	zsd = sdLatePhaseWeight(control, I_BC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> ~C:")
+	hm = meanEarlyPhaseWeight(control, exC, pr)
+	hsd = sdEarlyPhaseWeight(control, exC, pr)
+	zm = meanLatePhaseWeight(control, exC, pr)
+	zsd = sdLatePhaseWeight(control, exC, pr)
+	f.write(str(hm + zm) + "\t\t")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\t\t")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	cond_print(pr, "--------------------------------")
+	cond_print(pr, "Control -> Control:")
+	hm = meanEarlyPhaseWeight(control, control, pr)
+	hsd = sdEarlyPhaseWeight(control, control, pr)
+	zm = meanLatePhaseWeight(control, control, pr)
+	zsd = sdLatePhaseWeight(control, control, pr)
+	f.write(str(hm + zm) + "\n")
+	fsd.write(str(np.sqrt(hsd**2 + zsd**2)) + "\n")
+	cond_print(pr, "Mean total weight: " + str(hm + zm))
+
+	f.close()
+	fsd.close()
+
+
+# printMeanWeightsSingleCA
+# Computes and prints mean and standard deviation of CA, outgoing, incoming, and control weight in units of nC
+# ts: timestamp of the data file to read
+# time_for_readout: the time of the data file to read
+# N_pop: the number of neurons in the considered population
+# core: array of indices of the cell assembly (core) neurons
+def printMeanWeightsSingleCA(ts, time_for_readout, core, N_pop):
+
+	all = np.arange(N_pop) # the whole considered population
+	noncore = all[np.logical_not(np.in1d(all, core))] # the neurons outside the core
-# connectionsToCore
-# Prints and returns all the core neurons to which neuron i provides input
-# i: neuron index
-# pr [optional]: specifies if result is printed
-# return: array of neuron numbers
-def connectionsToCore(i, pr = True):
-	global adj
-	ctc = core[np.in1d(core, outgoingConnections(i, False))].flatten()
-	if pr:
-		print("Connections from neuron " + str(i) + " to core (" + str(len(ctc)) + "): \n" + str(ctc))
-
-	return ctc
-
-# connectionsToRSCore
-# Prints and returns all the recall-stimulated core neurons to which neuron i provides input
-# i: neuron index
-# pr [optional]: specifies if result is printed
-# return: array of neuron numbers
-def connectionsToRSCore(i, pr = True):
-	global adj
-	ctrsc = core_recall[np.in1d(core_recall, outgoingConnections(i, False))].flatten()
-	if pr:
-		print("Connections from neuron " + str(i) + " to recall-stim. core (" + str(len(ctrsc)) + "): \n" + str(ctrsc))
-
-	return ctrsc
-
-# connectionsToNonCore
-# Prints and returns all the non-core neurons to which neuron i provides input
-# i: neuron index
-# pr [optional]: specifies if result is printed
-# return: array of neuron numbers
-def connectionsToNonCore(i, pr = True):
-	global adj
-	outg = outgoingConnections(i, False)
-	ctnc = outg[np.negative(np.in1d(outg, core))]
-	if pr:
-		print("Connections from neuron " + str(i) + " to non-core neurons (" + str(len(ctnc)) + "): \n" + str(ctnc))
-
-	return ctnc
-
-# setRhombCore
-# Sets a rhomb-shaped core depending on given center and radius
-# core_center: the central neuron of the rhomb
-# core_radius: the "radius" of the rhomb
-def setRhombCore(core_center, core_radius):
-	global core
-
-	core = np.array([], dtype=np.int32)
-	core_size = 2*core_radius**2 + 2*core_radius + 1
-
-	for i in range(-core_radius, core_radius+1, 1):
-		num_cols = (core_radius-abs(i))
-
-		for j in range(-num_cols, num_cols+1, 1):
-			core = np.append(core, np.array([core_center+i*Nl+j]))
+	print("##############################################")
+	print("At time", time_for_readout)
+	loadWeightMatrix(ts + "_net_" + time_for_readout + ".txt", N_pop)
-# printMeanWeights
-# Prints mean and standard deviation of CA, outgoing, incoming, and control weight in units of nC
-def printMeanWeights():
 	print("--------------------------------")
 	print("Core -> core ('CA'):")
 	hm = meanEarlyPhaseWeight(core)
@@ -516,11 +1109,14 @@ def printMeanWeights():
 	sdLatePhaseWeight(noncore)
 	print("Mean total weight: " + str(hm + zm))
 
+##############################################################################################
 # main
 # Reads datasets from two simulations and computes mean CA, outgoing, incoming and control weights (early- and late-phase)
+# as for Luboeinski and Tetzlaff, 2021 (https://doi.org/10.1038/s42003-021-01778-y)
 # argv[]: timestamps of two simulations
+# example call from shell: python3 adjacencyFunctions.py "19-11-28_21-07-55" "19-11-28_22-10-17"
 if __name__ == "__main__":
 
 	if len(sys.argv) < 3:
@@ -530,22 +1126,21 @@ def printMeanWeights():
 	ts1 = str(sys.argv[1]) # timestamp for simulation data before consolidation
 	ts2 = str(sys.argv[2]) # timestamp for simulation data after consolidation
 
+	core = np.arange(150) # define the cell assembly core
+	N_pop = 1600
+
 	print("##############################################")
 	print("Before 10s-recall:")
-	loadWeightMatrix(ts1 + "_net_20.0.txt")
-	printMeanWeights()
+	printMeanWeightsSingleCA(ts1, "20.0", core, N_pop)
 
 	print("##############################################")
 	print("After 10s-recall:")
-	loadWeightMatrix(ts1 + "_net_20.1.txt")
-	printMeanWeights()
+	printMeanWeightsSingleCA(ts1, "20.1", core, N_pop)
 
 	print("##############################################")
 	print("Before 8h-recall:")
-	loadWeightMatrix(ts2 + "_net_28810.0.txt")
-	printMeanWeights()
+	printMeanWeightsSingleCA(ts2, "28810.0", core, N_pop)
 
 	print("##############################################")
 	print("After 8h-recall:")
-	loadWeightMatrix(ts2 + "_net_28810.1.txt")
-	printMeanWeights()
+	printMeanWeightsSingleCA(ts2, "28810.1", core, N_pop)
diff --git a/analysis/analyzeWeights.py b/analysis/analyzeWeights.py
new file mode 100755
index 0000000..27e633a
--- /dev/null
+++ b/analysis/analyzeWeights.py
@@ -0,0 +1,78 @@
+#################################################################################
+### Script to analyze the weight structure in a network (weight distributions ###
+### and mean weight within and between subpopulations)                        ###
+#################################################################################
+
+### Copyright 2020-2021 Jannik Luboeinski
+### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
+
+### example call from shell: python3 analyzeWeights.py "Weight Distributions and Mean Weight Matrix" "OVERLAP10 no AC, no ABC"
+
+import valueDistributions as vd
+import adjacencyFunctions as adj
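The bookkeeping in the `meanCoreWeights` hunk above splits the excitatory population into eight disjoint groups: the triple intersection, three exclusive pairwise intersections, three exclusive cores, and the control group. A standalone sketch of the same mask arithmetic (toy core indices, not the repository's paradigms; `&`/`~` are used here as shorthand for the repository's `np.logical_and`/`np.logical_not`):

```python
import numpy as np

# toy population and overlapping assembly cores (hypothetical values)
N_pop = 20
coreA = np.arange(0, 8)
coreB = np.arange(5, 13)
coreC = np.arange(10, 18)
all_neurons = np.arange(N_pop)

mA = np.in1d(all_neurons, coreA)
mB = np.in1d(all_neurons, coreB)
mC = np.in1d(all_neurons, coreC)

m_ABC = mA & mB & mC        # triple intersection
m_AB = mA & mB & ~mC        # exclusive pairwise intersections
m_AC = mA & mC & ~mB
m_BC = mB & mC & ~mA
m_exA = mA & ~mB & ~mC      # exclusive cores (in exactly one assembly)
m_exB = mB & ~mA & ~mC
m_exC = mC & ~mA & ~mB
m_ctrl = ~(mA | mB | mC)    # control: in no assembly

# the eight masks form a partition of the population
total = (m_ABC.sum() + m_AB.sum() + m_AC.sum() + m_BC.sum()
         + m_exA.sum() + m_exB.sum() + m_exC.sum() + m_ctrl.sum())
print(int(total))  # → 20
```

Because the groups are disjoint and exhaustive, the 8×8 table of mean weights written to `mean_tot_weights_*.txt` covers every synapse class exactly once.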
+from overlapParadigms import *
+from utilityFunctions import *
+import sys
+
+##############################################################################################
+### initialize
+N_pop = 2500 # number of neurons in the considered population
+core_size = 600 # number of excitatory neurons in one cell assembly
+MWM = False # specifies whether to create abstract mean weight matrix
+MCW = False # specifies whether to create file with mean core weights
+WD = False # specifies whether to create weight distribution plots
+
+if len(sys.argv) < 3: # if there are fewer than 2 commandline arguments
+	print("No argument provided! Running the default routine. Paradigm may be wrong.")
+	WD = True
+	MWM = True
+	paradigm = "NOOVERLAP"
+else:
+	# read strings from argument 2, telling what analyses to perform
+	if "Weight Distributions" in str(sys.argv[1]):
+		WD = True
+	if "Mean Weight Matrix" in str(sys.argv[1]):
+		MWM = True
+	if "Mean Core Weights" in str(sys.argv[1]):
+		MCW = True
+
+	# read argument 3, telling what paradigm to consider
+	paradigm = str(sys.argv[2])
+
+print("Paradigm:", paradigm)
+
+##############################################################################################
+### get cell assembly definitions
+try:
+	coreA, coreB, coreC = coreDefinitions(paradigm, core_size)
+except:
+	raise
+
+##############################################################################################
+### look for network output files in this directory
+rawpaths = Path(".")
+
+for x in sorted(rawpaths.iterdir()):
+
+	full_path = str(x)
+	hpath = os.path.split(full_path)[0] # take head
+	tpath = os.path.split(full_path)[1] # take tail
+
+	if not x.is_dir():
+
+		if hasTimestamp(tpath) and "_net_" in tpath and not "_av_" in tpath and not ".png" in tpath:
+			timestamp = tpath.split("_net_")[0]
+			time_for_readout = tpath.split("_net_")[1].split(".txt")[0]
+
+			if WD:
+				print("Plotting weight distributions from dataset", timestamp, "with time", time_for_readout)
+				N_pop_row = int(round(np.sqrt(N_pop)))
+				vd.plotWeightDistributions3CAs(".", timestamp, "", N_pop_row, time_for_readout, coreA, coreB, coreC)
+
+			if MWM:
+				print("Creating abstract mean weight matrix from dataset", timestamp, "with time", time_for_readout)
+				adj.meanWeightMatrix(timestamp, time_for_readout, coreA, coreB, coreC, N_pop, pr = True)
+
+			if MCW:
+				print("Computing mean core weights from dataset", timestamp, "with time", time_for_readout)
+				adj.meanCoreWeights(timestamp, time_for_readout, coreA, coreB, coreC, N_pop, pr = True)
diff --git a/analysis/assemblyAvalancheStatistics.py b/analysis/assemblyAvalancheStatistics.py
new file mode 100755
index 0000000..07393ea
--- /dev/null
+++ b/analysis/assemblyAvalancheStatistics.py
@@ -0,0 +1,402 @@
+###########################################################################################################################
+### Script to extract the time series of avalanches in cell assemblies and the whole population from spike raster data, ###
+### and to compute the likelihoods of avalanche occurrence and "transition",                                            ###
+### as well as the mean firing rates in the assemblies                                                                  ###
+###########################################################################################################################
+
+### Copyright (c) Jannik Luboeinski 2020-2021
+### License: Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
+### Contact: jannik.lubo[at]gmx.de
+
+import numpy as np
+import os
+import time
+import sys
+from pathlib import Path
+from shutil import copy2
+from overlapParadigms import *
+from utilityFunctions import *
+
+# main properties (most can be adjusted via commandline parameters, see at the end of the script)
+paradigm = "NOOVERLAP" # paradigm of overlaps between assemblies
+period_duration = 0.01 # binning period (in units of seconds)
+n_thresh = 10 # number of spikes in a binning period to consider them an avalanche
+new_plots = True # defines if new spike raster plots shall be created using gnuplot
+exc_pop_size = 2500 # number of neurons in the excitatory population
+core_size = 600 # total size of one cell assembly
+
+# cell assemblies
+coreA, coreB, coreC = coreDefinitions(paradigm, core_size)
+
+# control population
+mask_coreA = np.in1d(np.arange(exc_pop_size), coreA)
+mask_coreB = np.in1d(np.arange(exc_pop_size), coreB)
+mask_coreC = np.in1d(np.arange(exc_pop_size), coreC)
+ctrl = np.arange(exc_pop_size)[np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC)))] # control neurons (neurons that are not within a cell assembly)
+ctrl_size = len(ctrl)
+
+####################################
+# timeSeries
+# Computes the time series of avalanche occurrence and saves it to a file
+# timestamp: the timestamp of the simulation data
+# spike_raster_file: the name of the spike raster file
+# output_dir: relative path to the output directory
+# return: the time series as a list of characters
+def timeSeries(timestamp, spike_raster_file, output_dir):
+
+	t0 = time.time()
+
+	# read the last line and compute number of periods
+	with open(spike_raster_file, 'rb') as f:
+		f.seek(-2, os.SEEK_END)
+		while f.read(1) != b'\n': # seek last line
+			f.seek(-2, os.SEEK_CUR)
+		last_line = f.readline().decode()
+		num_periods_tot = int(float(last_line.split('\t\t')[0]) / period_duration) + 1
+
+	# count lines
+	with open(spike_raster_file) as f:
+		num_rows = sum(1 for _ in f)
+	print("num_rows =", num_rows)
+
+	# counters per period for the different cell assemblies
+	counterA = np.zeros(num_periods_tot, dtype=int)
+	counterB = np.zeros(num_periods_tot, dtype=int)
+	counterC = np.zeros(num_periods_tot, dtype=int)
+	counterOverall = np.zeros(num_periods_tot, dtype=int)
+	counterCtrl = np.zeros(num_periods_tot, dtype=int)
+	series = ["-" for i in range(num_periods_tot)]
+
+	# read all data
+	f = open(spike_raster_file)
+	for line in f:
+		row = line.split('\t\t')
+		t = float(row[0])
+		n = int(row[1])
+
+		current_period = int(np.floor(t / period_duration))
+
+		if n < exc_pop_size:
+			counterOverall[current_period] += 1
+
+			if n in coreA:
+				counterA[current_period] += 1
+
+			if n in coreB:
+				counterB[current_period] += 1
+
+			if n in coreC:
+				counterC[current_period] += 1
+
+			if n in ctrl:
+				counterCtrl[current_period] += 1
+
+	f.close()
+
+	# determine active CAs for each period and write data to file
+	fout = open(os.path.join(output_dir, timestamp + '_CA_time_series.txt'), 'w')
+
+	for i in range(num_periods_tot):
+		fout.write(str(round((i+0.5)*period_duration,4)) + "\t\t") # write time at 1/2 of a period
+
+		if counterOverall[i] > n_thresh:
+			series[i] = "o"
+		if counterC[i] > n_thresh:
+			series[i] = "C" + series[i]
+		if counterB[i] > n_thresh:
+			series[i] = "B" + series[i]
+		if counterA[i] > n_thresh:
+			series[i] = "A" + series[i]
+
+		fout.write(series[i] + "\t\t" + str(counterA[i]) + "\t\t" + str(counterB[i]) + "\t\t" + str(counterC[i]) + "\t\t" + str(counterOverall[i]) + "\n")
+
+	fout.close()
+
+	time_el = round(time.time()-t0) # elapsed time in seconds
+	time_el_str = "Elapsed time: "
+	if time_el < 60:
+		time_el_str += str(time_el) + " s"
+	else:
+		time_el_str += str(time_el // 60) + " m " + str(time_el % 60) + " s"
+	print(time_el_str)
+
+	# write firing rates to file
+	fout = open(os.path.join(output_dir, timestamp + '_firing_rates.txt'), 'w')
+	fout.write("nu(A) = " + str(np.sum(counterA) / (num_periods_tot*period_duration) / core_size) + "\n")
+	fout.write("nu(B) = " + str(np.sum(counterB) / (num_periods_tot*period_duration) / core_size) + "\n")
+	fout.write("nu(C) = " + str(np.sum(counterC) / (num_periods_tot*period_duration) / core_size) + "\n")
+	fout.write("nu(ctrl) = " + str(np.sum(counterCtrl) / (num_periods_tot*period_duration) / ctrl_size) + "\n")
+	fout.write("core_size = " + str(core_size) + "\n")
+	fout.write("ctrl_size = " + str(ctrl_size) + "\n")
+	fout.close()
+
+	return series
+
+####################################
+# transitionProbabilities
+#
Computes the likelihood of avalanche occurrence for each assembly, as well as the likelihoods of "transitions"/triggering
+# of assemblies, and saves the results to a file
+# timestamp: the timestamp of the simulation data
+# series: the time series as provided by timeSeries(...)
+# output_dir: relative path to the output directory
+def transitionProbabilities(timestamp, series, output_dir):
+
+	np.seterr(divide='ignore', invalid='ignore')
+
+	num_periods_tot = len(series)
+
+	nA, nB, nC, nO, nN = 0, 0, 0, 0, 0 # frequencies of avalanches in different assemblies/overall
+	nAA, nBB, nCC, nAB, nBA, nAC, nCA, nBC, nCB = 0, 0, 0, 0, 0, 0, 0, 0, 0 # frequencies of transitions
+	nOO, nOA, nOB, nOC, nAO, nBO, nCO = 0, 0, 0, 0, 0, 0, 0 # frequencies of transitions involving "overall"
+	nAN, nBN, nCN, nON, nNA, nNB, nNC, nNO, nNN = 0, 0, 0, 0, 0, 0, 0, 0, 0 # frequencies of transitions into/from void
+
+	for i in range(num_periods_tot-1):
+
+		if series[i] == "-": # there is no avalanche in this period
+			nN += 1
+			if series[i+1] == "-":
+				nNN += 1
+			else:
+				nNO += 1
+				if "A" in series[i+1]:
+					nNA += 1
+				if "B" in series[i+1]:
+					nNB += 1
+				if "C" in series[i+1]:
+					nNC += 1
+
+		else: # there is an avalanche in this period
+			nO += 1
+			if series[i+1] == "-":
+				nON += 1
+			else:
+				nOO += 1
+				if "A" in series[i+1]:
+					nOA += 1
+				if "B" in series[i+1]:
+					nOB += 1
+				if "C" in series[i+1]:
+					nOC += 1
+
+			if "A" in series[i]: # there is an avalanche in A
+				nA += 1
+				if series[i+1] == "-":
+					nAN += 1
+				else:
+					nAO += 1
+					if "A" in series[i+1]:
+						nAA += 1
+					if "B" in series[i+1]:
+						nAB += 1
+					if "C" in series[i+1]:
+						nAC += 1
+			if "B" in series[i]: # there is an avalanche in B
+				nB += 1
+				if series[i+1] == "-":
+					nBN += 1
+				else:
+					nBO += 1
+					if "A" in series[i+1]:
+						nBA += 1
+					if "B" in series[i+1]:
+						nBB += 1
+					if "C" in series[i+1]:
+						nBC += 1
+			if "C" in series[i]: # there is an avalanche in C
+				nC += 1
+				if series[i+1] == "-":
+					nCN += 1
+				else:
+					nCO += 1
+					if "A" in 
series[i+1]: + nCA += 1 + if "B" in series[i+1]: + nCB += 1 + if "C" in series[i+1]: + nCC += 1 + + # human-readable output + fout = open(os.path.join(output_dir, timestamp + '_CA_probabilities.txt'), 'w') + fout.write("Timestamp: " + timestamp) + fout.write("\nTotal number of periods: " + str(num_periods_tot)) + fout.write("\n\nTotal probabilities:") + fout.write("\np(A) = " + str(nA / num_periods_tot)) # likelihood of avalanche in A + fout.write("\np(B) = " + str(nB / num_periods_tot)) # likelihood of avalanche in B + fout.write("\np(C) = " + str(nC / num_periods_tot)) # likelihood of avalanche in C + fout.write("\np(overall) = " + str(nO / num_periods_tot)) # probability of avalanche in overall population + fout.write("\np(-) = " + str(nN / num_periods_tot)) # probability of no avalanche + + fout.write("\n\nTrigger probabilities:") + fout.write("\np_A(A) = " + str(np.divide(nAA, num_periods_tot))) # likelihood of triggering A + fout.write("\np_A(B) = " + str(np.divide(nAB, num_periods_tot))) # likelihood of triggering B + fout.write("\np_A(C) = " + str(np.divide(nAC, num_periods_tot))) # likelihood of triggering C + fout.write("\np_A(overall) = " + str(np.divide(nAO, nAO+nAN))) # probability of triggering something + fout.write("\np_A(-) = " + str(np.divide(nAN, nAO+nAN))) # probability of triggering nothing + + fout.write("\np_B(A) = " + str(np.divide(nBA, num_periods_tot))) # likelihood of triggering A + fout.write("\np_B(B) = " + str(np.divide(nBB, num_periods_tot))) # likelihood of triggering B + fout.write("\np_B(C) = " + str(np.divide(nBC, num_periods_tot))) # likelihood of triggering C + fout.write("\np_B(overall) = " + str(np.divide(nBO, nBO+nBN))) # probability of triggering something + fout.write("\np_B(-) = " + str(np.divide(nBN, nBO+nBN))) # probability of triggering nothing + + fout.write("\np_C(A) = " + str(np.divide(nCA, num_periods_tot))) # likelihood of triggering A + fout.write("\np_C(B) = " + str(np.divide(nCB, num_periods_tot))) # likelihood 
of triggering B
+	fout.write("\np_C(C) = " + str(np.divide(nCC, num_periods_tot))) # likelihood of triggering C
+	fout.write("\np_C(overall) = " + str(np.divide(nCO, nCO+nCN))) # probability of triggering something
+	fout.write("\np_C(-) = " + str(np.divide(nCN, nCO+nCN))) # probability of triggering nothing
+
+	fout.write("\np_overall(A) = " + str(np.divide(nOA, num_periods_tot))) # likelihood of triggering A
+	fout.write("\np_overall(B) = " + str(np.divide(nOB, num_periods_tot))) # likelihood of triggering B
+	fout.write("\np_overall(C) = " + str(np.divide(nOC, num_periods_tot))) # likelihood of triggering C
+	fout.write("\np_overall(overall) = " + str(np.divide(nOO, nOO+nON))) # probability of triggering something
+	fout.write("\np_overall(-) = " + str(np.divide(nON, nOO+nON))) # probability of triggering nothing
+
+	fout.write("\np_-(A) = " + str(np.divide(nNA, num_periods_tot))) # likelihood of triggering A
+	fout.write("\np_-(B) = " + str(np.divide(nNB, num_periods_tot))) # likelihood of triggering B
+	fout.write("\np_-(C) = " + str(np.divide(nNC, num_periods_tot))) # likelihood of triggering C
+	fout.write("\np_-(overall) = " + str(np.divide(nNO, nNO+nNN))) # probability of triggering something
+	fout.write("\np_-(-) = " + str(np.divide(nNN, nNO+nNN))) # probability of triggering nothing
+	fout.close()
+
+	# output for machine readability
+	fout = open(os.path.join(output_dir, timestamp + '_CA_probabilities_raw.txt'), 'w')
+	fout.write(timestamp + "\n\n\n")
+	fout.write(str(nA / num_periods_tot) + "\n")
+	fout.write(str(nB / num_periods_tot) + "\n")
+	fout.write(str(nC / num_periods_tot) + "\n")
+	fout.write(str(nO / num_periods_tot) + "\n")
+	fout.write(str(nN / num_periods_tot) + "\n")
+	fout.close()
+
+	n_transitions = sum((nOO, nON, nNO, nNN))
+	if n_transitions == num_periods_tot-1:
+		print("Normalization check succeeded.")
+	else:
+		print("Normalization check failed:", n_transitions, "transitions found, as compared to", num_periods_tot-1, 
"expected.") + +#################################### +# spikeRasterPlot +# Creates two spike raster plots in the data directory and copies them to the output directory +# timestamp: the timestamp of the data +# data_dir: the directory containing the simulation data +# output_dir: relative path to the output directory +# new_plots: specifies if new plots shall be created +def spikeRasterPlot(timestamp, data_dir, output_dir, new_plots): + + plot_file1 = timestamp + "_spike_raster.png" + plot_file2 = timestamp + "_spike_raster2.png" + + work_dir = os.getcwd() # get the current working directory + if data_dir == "": + os.chdir(".") + else: + os.chdir(data_dir) # change to data directory + + if new_plots: + fout = open("spike_raster.gpl", "w") + fout.write("set term png enhanced font Sans 20 size 1280,960 lw 2.5\n") + fout.write("set output '" + plot_file1 + "'\n\n") + fout.write("Ne = 2500\n") + fout.write("Ni = 625\n") + fout.write("set xlabel 'Time (s)'\n") + fout.write("unset ylabel\n") + fout.write("set yrange [0:1]\n") + fout.write("set ytics out ('#0' 0.05, '#625' 0.23, '#1250' 0.41, '#1875' 0.59, '#2500' 0.77)\n") + fout.write("plot [x=100:120] '" + timestamp + "_spike_raster.txt' using 1:($2 < Ne ? (0.9*$2/(Ne+Ni) + 0.05) : 1/0) notitle with dots lc 'blue', \\\n") + fout.write(" '" + timestamp + "_spike_raster.txt' using 1:($2 >= Ne ? (0.9*$2/(Ne+Ni) + 0.05) : 1/0) notitle with dots lc 'red'\n\n") + fout.write("###########################################\n") + fout.write("set output '" + plot_file2 + "'\n") + fout.write("plot [x=100:180] '" + timestamp + "_spike_raster.txt' using 1:($2 < Ne ? (0.9*$2/(Ne+Ni) + 0.05) : 1/0) notitle with dots lc 'blue', \\\n") + fout.write(" '" + timestamp + "_spike_raster.txt' using 1:($2 >= Ne ? 
(0.9*$2/(Ne+Ni) + 0.05) : 1/0) notitle with dots lc 'red'\n")
+		fout.close()
+
+		os.system("gnuplot spike_raster.gpl")
+
+	if os.path.exists(plot_file1) and os.path.exists(plot_file2):
+		copy2(plot_file1, os.path.join(work_dir, output_dir)) # copy spike raster plot #1 to output directory
+		copy2(plot_file2, os.path.join(work_dir, output_dir)) # copy spike raster plot #2 to output directory
+	else:
+		print("Warning: " + data_dir + ": plot files not found.")
+
+	os.chdir(work_dir) # change back to previous working directory
+
+
+######################################
+# dirRecursion
+# Walks recursively through a directory looking for spike raster data;
+# if data are found, computes time series, avalanche likelihoods (and, if specified, creates spike raster plots)
+# directory: the directory to consider
+# output_dir: relative path to the output directory
+# new_plots: specifies if new plots shall be created
+def dirRecursion(directory, output_dir, new_plots):
+
+	print("Reading directory " + directory)
+	rawpaths = Path(directory)
+
+	for x in sorted(rawpaths.iterdir()):
+
+		full_path = str(x)
+		hpath = os.path.split(full_path)[0] # take head
+		tpath = os.path.split(full_path)[1] # take tail
+
+		if not x.is_dir():
+
+			if "_spike_raster.txt" in tpath:
+				timestamp = tpath.split("_spike_raster.txt")[0]
+				series = timeSeries(timestamp, full_path, output_dir)
+				transitionProbabilities(timestamp, series, output_dir)
+				spikeRasterPlot(timestamp, hpath, output_dir, new_plots)
+
+				params_file = os.path.join(hpath, timestamp + "_PARAMS.txt")
+				if os.path.exists(params_file):
+					copy2(params_file, output_dir)
+				else:
+					print("Warning: " + hpath + ": no parameter file found.")
+
+		else:
+			if hasTimestamp(tpath):
+				dirRecursion(directory + os.sep + tpath, output_dir, new_plots)
+
+
+###############################################
+# main:
+
+### example call from shell: python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, 
no ABC" 0.01 10 False
+
+if len(sys.argv) > 1: # if there is at least 1 additional commandline argument
+	paradigm = sys.argv[1]
+	coreA, coreB, coreC = coreDefinitions(paradigm, core_size) # re-define cell assemblies
+if len(sys.argv) > 2: # if there are at least 2 additional commandline arguments
+	period_duration = float(sys.argv[2])
+if len(sys.argv) > 3: # if there are at least 3 additional commandline arguments
+	n_thresh = int(sys.argv[3])
+if len(sys.argv) > 4: # if there are at least 4 additional commandline arguments
+	if sys.argv[4] == "0" or sys.argv[4] == "False":
+		new_plots = False
+		print("Creation of new plots switched off")
+	else:
+		new_plots = True
+
+output_dir = "./avalanche_statistics_" + str(period_duration) + "_" + str(n_thresh) # output directory for analysis results
+
+if not os.path.exists(output_dir):
+	os.mkdir(output_dir)
+
+print("Output directory:", output_dir)
+print("Paradigm:", paradigm)
+print("Bin size:", str(period_duration), "s")
+print("Detection threshold:", str(n_thresh))
+
+dirRecursion('.', output_dir, new_plots) # walk through directories and analyze data
+mergeRawData(output_dir, "_CA_probabilities_raw.txt", "all_trials_raw.txt", remove_raw=True) # merge machine-readable output
+
diff --git a/analysis/calculateMIa.py b/analysis/calculateMIa.py
index b4e3cb0..fe99bf2 100755
--- a/analysis/calculateMIa.py
+++ b/analysis/calculateMIa.py
@@ -6,8 +6,8 @@
 ### Copyright 2018-2021 Jannik Luboeinski
 ### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
 
-from readWeightData import *
-from probDistributions import *
+from utilityFunctions import *
+from valueDistributions import *
 
 import numpy as np
 from pathlib import Path
@@ -18,26 +18,28 @@
 # the reference distribution
 # nppath: path to the network_plots directory to read the data from
 # timestamp: a string containing date and time (to access correct paths)
-# Nl: the number of excitatory neurons in one line of a quadratic grid 
+# Nl_exc: the number of excitatory neurons in one line of a quadratic grid
 # time_for_activity: the time at which the activities shall be read out (some time during recall)
 # time_ref: the reference time (for getting the activity distribution during learning)
 # core: the neurons in the cell assembly (for stipulation; only required if no activity distribution during learning is available)
-def calculateMIa(nppath, timestamp, Nl, time_for_activity, time_ref = "11.0", core = np.array([])):
+def calculateMIa(nppath, timestamp, Nl_exc, time_for_activity, time_ref = "11.0", core = np.array([])):
 
 	if time_ref: # use reference firing rate distribution from data (for learned cell assembly)
 		times_for_readout_list = [time_ref, time_for_activity] # the simulation times at which the activities shall be read out
+		print("Using reference distribution at " + time_ref + "...")
 	else: # use model firing rate distribution (for stipulated cell assembly)
 		times_for_readout_list = [time_for_activity]
-		v_model = np.zeros(Nl**2)
-		v_model[core] = 1 # entropy/MI of this distribution for Nl=40: 0.4689956
-		v_model = np.reshape(v_model, (Nl,Nl))
+		v_model = np.zeros(Nl_exc**2)
+		v_model[core] = 1 # entropy/MI of this distribution for Nl_exc=40: 0.4689956
+		v_model = np.reshape(v_model, (Nl_exc,Nl_exc))
+		print("Using stipulated reference distribution...")
 
-	connections = [np.zeros((Nl**2,Nl**2)) for x in times_for_readout_list]
-	h = [np.zeros((Nl**2,Nl**2)) for x in times_for_readout_list]
-	z = [np.zeros((Nl**2,Nl**2)) for x in times_for_readout_list]
-	v = [np.zeros((Nl,Nl)) for x in times_for_readout_list]
+	connections = [np.zeros((Nl_exc**2,Nl_exc**2)) for x in times_for_readout_list]
+	h = [np.zeros((Nl_exc**2,Nl_exc**2)) for x in times_for_readout_list]
+	z = [np.zeros((Nl_exc**2,Nl_exc**2)) for x in times_for_readout_list]
+	v = [np.zeros((Nl_exc,Nl_exc)) for x in times_for_readout_list]
 
-	v_array = np.zeros(Nl*Nl) # data array
+	v_array = np.zeros(Nl_exc*Nl_exc) # data array 
 	rawpaths = Path(nppath)
 
@@ -58,7 +60,7 @@ def calculateMIa(nppath, timestamp, Nl, time_for_activity, time_ref = "11.0", co
 				raise FileNotFoundError('"' + timestamp + '_net_' + time_for_readout + '.txt" was not found')
 
 			try:
-				connections[i], h[i], z[i], v[i] = readWeightMatrixData(path, Nl)
+				connections[i], h[i], z[i], v[i] = readWeightMatrixData(path, Nl_exc)
 			except ValueError:
 				raise
 
diff --git a/analysis/calculateQ.py b/analysis/calculateQ.py
index 75c523b..77b7e67 100755
--- a/analysis/calculateQ.py
+++ b/analysis/calculateQ.py
@@ -17,18 +17,18 @@
 # nppath: path to the network_plots directory to read the data from
 # timestamp: a string containing date and time (to access correct paths)
 # core: array of the neurons belonging to the stimulated core
-# Nl: the number of excitatory neurons in one line of a quadratic grid
+# Nl_exc: the number of excitatory neurons in one line of a quadratic grid
 # time_for_activity: the time at which the activities shall be read out (some time during recall)
 # recall_fraction: the fraction of core neurons that are activated for recall
-def calculateQ(nppath, timestamp, core, Nl, time_for_activity, recall_fraction):
+def calculateQ(nppath, timestamp, core, Nl_exc, time_for_activity, recall_fraction):
 
 	core_recall = core[0:int(np.floor(float(recall_fraction)*core.shape[0]))]
 	core_norecall = core[np.logical_not(np.in1d(core, core_recall))]
-	control = np.delete(np.arange(Nl*Nl), core)
+	control = np.delete(np.arange(Nl_exc*Nl_exc), core)
 
 	path = ""
-	v_array = np.zeros(Nl*Nl) # data array
+	v_array = np.zeros(Nl_exc*Nl_exc) # data array
 
 	# look for data file [timestamp]_net_[time_for_activity].txt
 	path = ""
@@ -56,15 +56,15 @@ def calculateQ(nppath, timestamp, core, Nl, time_for_activity, recall_fraction):
 
 		nn = len(rawdata)-1
 		f.close()
 
-		if nn != 2*Nl*Nl+Nl+3:
-			raise ValueError(str(nn) + ' instead of ' + str(2*Nl*Nl+Nl+3) + ' lines in data file "' + path + '"')
+		if nn != 2*Nl_exc*Nl_exc+Nl_exc+3:
+			raise ValueError(str(nn) + ' 
instead of ' + str(2*Nl_exc*Nl_exc+Nl_exc+3) + ' lines in data file "' + path + '"') - offset = 2*Nl*Nl+2 + offset = 2*Nl_exc*Nl_exc+2 for n in range(nn-1): # loop over lines if n >= offset: - n2 = (n - offset) * Nl + n2 = (n - offset) * Nl_exc line_values = rawdata[n].split("\t\t") for p in range(len(line_values)): diff --git a/analysis/extractParamsQMI.py b/analysis/extractParamsQMI.py index ecdb307..be2e7ca 100755 --- a/analysis/extractParamsQMI.py +++ b/analysis/extractParamsQMI.py @@ -6,105 +6,19 @@ ### Copyright 2018-2021 Jannik Luboeinski ### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) - import numpy as np import os import traceback +from utilityFunctions import * from calculateQ import * from calculateMIa import * from pathlib import Path from shutil import copyfile np.set_printoptions(threshold=1e10, linewidth=200) # extend console print range for numpy arrays -Nl = 40 # the number of excitatory neurons in one line of a quadratic grid -ref_time = "11.0" # readout time for the reference firing rate or weight distribution (typically during learning) -# hasTimestamp -# Checks if the given filename starts with a timestamp -# filename: a string -# return: true if presumably there is a timestamp, false if not -def hasTimestamp(filename): - try: - if filename[2] == "-" and filename[5] == "-" and filename[8] == "_" and \ - filename[11] == "-" and filename[14] == "-": - return True - except: - pass - - return False - -# readParams -# Reads some parameters from a "[timestamp]_PARAMS.txt" file -# path: path to the parameter file to read the data from -def readParams(path): - - try: - f = open(path) - - except IOError: - print('Error opening "' + path + '"') - exit() - - r_CA = -1 # radius of the stimulated core - s_CA = -1 - - # read activities from file and determine mean activities for different regions - rawdata = f.read() - rawdata = rawdata.split('\n') - nn = len(rawdata) - f.close() - - for i in range(nn): - segs = rawdata[i].split(' 
') - - if segs[0] == "Ca_pre": - Ca_pre = float(segs[2]) - elif segs[0] == "Ca_post": - Ca_post = float(segs[2]) - elif segs[0] == "theta_p": - theta_p = float(segs[2]) - elif segs[0] == "theta_d": - theta_d = float(segs[2]) - elif segs[0] == "R_mem": - R_mem = float(segs[2]) - elif segs[0] == "learning": - if segs[3] == "": - lprot = "none" - else: - lprot = segs[3] - elif segs[0] == "stimulus": # old version of previous condition - lprot = segs[2] - elif segs[0] == "recall" and segs[1] == "stimulus": - rprot = segs[3] - elif segs[0] == "recall" and segs[1] != "fraction": # old version of previous condition - rprot = segs[2] - elif segs[0] == "recall" and segs[1] == "fraction": - recall_fraction = segs[3] - elif segs[0] == "pc" or segs[0] == "p_c": - p_c = float(segs[2]) - elif segs[0] == "w_ei": - w_ei = float(segs[2]) - elif segs[0] == "w_ie": - w_ie = float(segs[2]) - elif segs[0] == "w_ii": - w_ii = float(segs[2]) - elif segs[0] == "theta_pro_c": - theta_pro_c = float(segs[2]) - elif segs[0] == "N_stim": - N_stim = int(segs[2]) - elif segs[0] == "I_const" or segs[0] == "I_0": - I_0 = float(segs[2]) - elif segs[0] == "dt": - dt = float(segs[2]) - elif segs[0] == "core" and segs[len(segs)-2] == "radius": - r_CA = int(segs[len(segs)-1]) - elif segs[0] == "core" and (segs[2] == "first" or segs[2] == "random"): - s_CA = int(segs[3]) - - if r_CA == -1 and s_CA == -1: # is not specified in the parameter file of older data - r_CA = int(input('Enter the radius of the stimulated core: ')) - - return [w_ei, w_ie, w_ii, p_c, Ca_pre, Ca_post, theta_p, theta_d, lprot, rprot, dt, theta_pro_c, s_CA, N_stim, I_0, R_mem, recall_fraction] +Nl_exc = 40 # the number of excitatory neurons in one line of a quadratic grid +ref_time = "11.0" # readout time for the reference firing rate or weight distribution (typically during learning) # extractRecursion # Recursively looks for data directories and extracts parameters and the Q and MI measures from them @@ -173,7 +87,7 @@ def 
extractRecursion(directory, fout): # compute the Q value try: - Q, Q_err, v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err = calculateQ(full_path + os.sep + "network_plots" + os.sep, timestamp, core, Nl, readout_time, params[16]) + Q, Q_err, v_as, v_as_err, v_ans, v_ans_err, v_ctrl, v_ctrl_err = calculateQ(full_path + os.sep + "network_plots" + os.sep, timestamp, core, Nl_exc, readout_time, params[16]) except OSError as e: print(traceback.format_exc()) except ValueError as e: @@ -187,9 +101,9 @@ def extractRecursion(directory, fout): # compute the MI and selfMI value try: if "STIP" in params[8]: # stipulated CA - MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl, readout_time, "", core) + MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, "", core) else: # learned CA - MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl, readout_time, ref_time) + MI, selfMI = calculateMIa(full_path + os.sep + "network_plots" + os.sep, timestamp, Nl_exc, readout_time, ref_time) except OSError as e: print(traceback.format_exc()) except ValueError as e: diff --git a/analysis/frequencyAnalysisSpikeRaster.py b/analysis/frequencyAnalysisSpikeRaster.py new file mode 100755 index 0000000..d471c78 --- /dev/null +++ b/analysis/frequencyAnalysisSpikeRaster.py @@ -0,0 +1,61 @@ +######################################################################## +### Routine to compute the frequency spectrum of spike raster data ### +### from multiple data files ### +######################################################################## + +### Copyright 2019-2021 Jannik Luboeinski +### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) + +import numpy as np +import pandas as pd +import scipy +import scipy.fftpack +from scipy import pi +import matplotlib.pyplot as plt +import pylab +import os +from pathlib import Path + +dt = 0.0002 # duration of one time 
bin
+
+# search in current directory for a "*_spike_raster.txt" file
+rawpaths = Path(".")
+df = None
+for x in sorted(rawpaths.iterdir()):
+
+	full_path = str(x)
+	tpath = os.path.split(full_path)[1] # take tail
+	if "_spike_raster.txt" in tpath:
+		print("Reading", tpath)
+		df_new = pd.read_table(tpath, header=None, sep="\t\t", engine='python')
+		if df is None:
+			df = df_new
+		else:
+			df = df.append(df_new)
+
+if df is None:
+	print("No data found. Exiting...")
+	exit()
+
+# count the number of spikes per time bin (!!! not per 10 ms bin !!!)
+spike_counts = df[df.columns[0]].value_counts().sort_index()
+time_all = spike_counts.index.to_numpy()
+spikes_whole_all = spike_counts.to_numpy()
+print(time_all)
+print(spikes_whole_all)
+
+# Fast Fourier Transform (use scipy.fftpack, consistent with the import above)
+FFT = abs(scipy.fftpack.fft(spikes_whole_all))
+freqs = scipy.fftpack.fftfreq(spikes_whole_all.size, dt)
+
+pylab.subplot(211)
+pylab.xlabel('Time (s)')
+pylab.ylabel('Spikes')
+pylab.plot(time_all, spikes_whole_all, '-')
+pylab.subplot(212)
+pylab.xlabel('Frequency (Hz)')
+pylab.ylabel('Amplitude')
+plt.semilogy(freqs, FFT, '-', color="darkgreen")
+pylab.show()
+
+#plt.savefig("frequencyAnalysis.svg")
diff --git a/analysis/numberOfSpikesInBins.py b/analysis/numberOfSpikesInBins.py
new file mode 100755
index 0000000..218905d
--- /dev/null
+++ b/analysis/numberOfSpikesInBins.py
@@ -0,0 +1,148 @@
+#########################################################################################
+### Routine to determine the distribution of spikes per time bin in the whole network ###
+### and in different assemblies from multiple data files                              ###
+#########################################################################################
+
+### Copyright 2019-2021 Jannik Luboeinski
+### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
+
+import numpy as np
+import pandas as pd
+from scipy import optimize
+import matplotlib.pyplot as plt
+import pylab
+import os
+from pathlib import Path
+
+# search in current 
directory for a "*_CA_time_series.txt" file (created by assemblyAvalancheStatistics.py) +rawpaths = Path(".") +df = None +for x in sorted(rawpaths.iterdir()): + + full_path = str(x) + tpath = os.path.split(full_path)[1] # take tail + if "_CA_time_series.txt" in tpath: + print("Reading", tpath) + df_new = pd.read_table(tpath, header=None, sep="\t\t", engine='python') + if df is None: + df = df_new + else: + df = df.append(df_new) + +if df is None: + print("No data found. Exiting...") + exit() + +num_pars = 2 # number of fit parameters (2: standard power-law fit, 3: power-law fit with shift) +fit_range_start = 10 # first datapoint to be used for fit +fit_range_end = 200 # last datapoint to be used to fit + +# creating histograms +valcA = df[df.columns[2]].value_counts() +print("index(A) =", valcA.index) +print("values(A) =", valcA.values) +print("quantiles for A:\n", df[df.columns[2]].quantile([.25, .5, .99])) + +valcB = df[df.columns[3]].value_counts() +print("index(B) =", valcB.index) +print("values(B) =", valcB.values) +print("quantiles for B:\n", df[df.columns[3]].quantile([.25, .5, .99])) + +valcC = df[df.columns[4]].value_counts() +print("index(C) =", valcC.index) +print("values(C) =", valcC.values) +print("quantiles for C:\n", df[df.columns[4]].quantile([.25, .5, .99])) + +df_all = df[df.columns[5]] +valcAll = df_all.value_counts() +print("index(all) =", valcAll.index) +print("values(all) =", valcAll.values) +print("quantiles for all:\n", df_all.quantile([.25, .5, .99])) + +# plotting and fitting preparations +fig, (ax1, ax2, ax3) = plt.subplots(1, 3) +fig.set_size_inches(10,10) +ax1 = plt.subplot2grid((6, 4), (0, 0), colspan=1, rowspan=6) +ax2 = plt.subplot2grid((6, 4), (0, 1), colspan=1, rowspan=6, sharey=ax1) +ax3 = plt.subplot2grid((6, 4), (0, 2), colspan=1, rowspan=6, sharey=ax1) +ax4 = plt.subplot2grid((6, 4), (0, 3), colspan=1, rowspan=6, sharey=ax1) +fig.suptitle('Distribution of spikes per assembly in one bin') +if num_pars == 2: + fitfunc = lambda 
par, x: par[0] / (x**par[1]) + par0 = [12000, 1.55] # initial guess +else: + fitfunc = lambda par, x: par[0] / (x**par[1] + par[2]) + par0 = [7000, 1.55, 1] # initial guess +errfunc = lambda par, x, y: fitfunc(par, x) - y # distance to the target function + +# fitting histogram for A +if num_pars == 2: + x_A = valcA.index[np.logical_and(valcA.index>=fit_range_start, valcA.index<=fit_range_end)] + y_A = valcA.values[np.logical_and(valcA.index>=fit_range_start, valcA.index<=fit_range_end)] +else: + x_A = valcA.index + y_A = valcA.values +fit_result_A = optimize.least_squares(errfunc, par0[:], args=(x_A, y_A)) +print("Fit parameters for A:", fit_result_A.x, ", success:", fit_result_A.success, \ + ", optimality:", fit_result_A.optimality, ", residual:", np.sum(np.square(fit_result_A.fun))/fit_result_A.fun.shape[0]) + +# fitting histogram for B +if num_pars == 2: + x_B = valcB.index[np.logical_and(valcB.index>=fit_range_start, valcB.index<=fit_range_end)] + y_B = valcB.values[np.logical_and(valcB.index>=fit_range_start, valcB.index<=fit_range_end)] +else: + x_B = valcB.index + y_B = valcB.values +fit_result_B = optimize.least_squares(errfunc, par0[:], args=(x_B, y_B)) +print("Fit parameters for B:", fit_result_B.x, ", success:", fit_result_B.success, \ + ", optimality:", fit_result_B.optimality, ", residual:", np.sum(np.square(fit_result_B.fun))/fit_result_B.fun.shape[0]) + +# fitting histogram for C +if num_pars == 2: + x_C = valcC.index[np.logical_and(valcC.index>=fit_range_start, valcC.index<=fit_range_end)] + y_C = valcC.values[np.logical_and(valcC.index>=fit_range_start, valcC.index<=fit_range_end)] +else: + x_C = valcC.index + y_C = valcC.values +fit_result_C = optimize.least_squares(errfunc, par0[:], args=(x_C, y_C)) +print("Fit parameters for C:", fit_result_C.x, ", success:", fit_result_C.success, \ + ", optimality:", fit_result_C.optimality, ", residual:", np.sum(np.square(fit_result_C.fun))/fit_result_C.fun.shape[0]) + +# fitting histogram for all +if 
num_pars == 2:
+	x_all = valcAll.index[np.logical_and(valcAll.index>=fit_range_start, valcAll.index<=fit_range_end)]
+	y_all = valcAll.values[np.logical_and(valcAll.index>=fit_range_start, valcAll.index<=fit_range_end)]
+else:
+	x_all = valcAll.index
+	y_all = valcAll.values
+fit_result_all = optimize.least_squares(errfunc, par0[:], args=(x_all, y_all))
+print("Fit parameters for all:", fit_result_all.x, ", success:", fit_result_all.success, \
+      ", optimality:", fit_result_all.optimality, ", residual:", np.sum(np.square(fit_result_all.fun))/fit_result_all.fun.shape[0])
+
+# plotting
+range_x = np.linspace(1, np.amax([valcA.index.max(), valcB.index.max(), valcC.index.max()]), 2000)
+range_x_all = np.linspace(1, valcAll.index.max(), 2000)
+
+ax1.text(1, 0.35, r"$\gamma$ = " + str(round(fit_result_A.x[1],2)))
+ax2.text(1, 0.35, r"$\gamma$ = " + str(round(fit_result_B.x[1],2)))
+ax3.text(1, 0.35, r"$\gamma$ = " + str(round(fit_result_C.x[1],2)))
+ax4.text(1, 0.35, r"$\gamma$ = " + str(round(fit_result_all.x[1],2)))
+
+ax1.set_xlabel('Number of spikes')
+ax1.set_ylabel('Frequency')
+ax2.set_xlabel('Number of spikes')
+ax3.set_xlabel('Number of spikes')
+ax4.set_xlabel('Number of spikes')
+ax1.set_ylim(0.1, 1e5)
+
+ax1.loglog([df[df.columns[2]].quantile([.99]), df[df.columns[2]].quantile([.99])], [0.1, 1e5], "-", color="#cccccc", dashes=[6, 2])
+plot1d,plot1f = ax1.loglog(valcA.index, valcA.values, "ro", range_x, fitfunc(fit_result_A.x, range_x), "-", color="#004586", label='A') # plot of the data and the fit
+ax2.loglog([df[df.columns[3]].quantile([.99]), df[df.columns[3]].quantile([.99])], [0.1, 1e5], "-", color="#cccccc", dashes=[6, 2])
+plot2d,plot2f = ax2.loglog(valcB.index, valcB.values, "ro", range_x, fitfunc(fit_result_B.x, range_x), "-", color="#ff420e", label='B') # plot of the data and the fit
+ax3.loglog([df[df.columns[4]].quantile([.99]), df[df.columns[4]].quantile([.99])], [0.1, 1e5], "-", color="#cccccc", dashes=[6, 2])
+plot3d,plot3f = 
ax3.loglog(valcC.index, valcC.values, "ro", range_x, fitfunc(fit_result_C.x, range_x), "-", color="#ffd320", label='C') # plot of the data and the fit +ax4.loglog([df[df.columns[5]].median(), df[df.columns[5]].median()], [0.1, 1e5], "-", color="#333333", dashes=[6, 2]) +plot4d,plot4f = ax4.loglog(valcAll.index, valcAll.values, "ro", range_x_all, fitfunc(fit_result_all.x, range_x_all), "-", color="#111111", label='Overall') # plot of the data and the fit + +#plt.savefig("numberOfSpikesInBins.svg") +plt.show() diff --git a/analysis/overlapParadigms.py b/analysis/overlapParadigms.py new file mode 100755 index 0000000..257f431 --- /dev/null +++ b/analysis/overlapParadigms.py @@ -0,0 +1,103 @@ +############################################################## +### Definitions of assembly neurons in different paradigms ### +### of overlap between cell assemblies ### +############################################################## + +### Copyright 2020-2021 Jannik Luboeinski +### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) + +import numpy as np + +# coreDefinitions +# Returns the neuron numbers belonging to each of three assemblies in a given paradigm +# paradigm: name of the paradigm to consider +# core_size: the size of one assembly +# return: the three assemblies +def coreDefinitions(paradigm, core_size = 600): + + # full_overlap + # Returns the neuron number belonging to each of three assemblies with equal overlaps + # overlap: overlap between each two cell assemblies (0.1 equals "OVERLAP10" and so on) + # return: the three assemblies + def full_overlap(overlap): + tot_wo_overlap = 1-overlap + half_overlap = overlap / 2 + tot2_wo_overlap_oh = 2-overlap-half_overlap + core1 = np.arange(core_size) + core2 = np.arange(int(np.round(tot_wo_overlap*core_size)), int(np.round(tot_wo_overlap*core_size))+core_size) + core3 = np.concatenate(( np.arange(int(np.round(tot2_wo_overlap_oh*core_size)), 
int(np.round(tot2_wo_overlap_oh*core_size))+int(np.round(tot_wo_overlap*core_size))), \ + np.arange(int(np.round(half_overlap*core_size))), \ + np.arange(int(np.round(tot_wo_overlap*core_size)), int(np.round(tot_wo_overlap*core_size))+int(np.round(half_overlap*core_size))) )) + + return (core1, core2, core3) + + # no_ABC_overlap + # Returns the neuron number belonging to each of three assemblies with equal overlaps but no common overlap + # overlap: overlap between each two cell assemblies (0.1 equals "OVERLAP10 no ABC" and so on) + # return: the three assemblies + def no_ABC_overlap(overlap): + tot_wo_overlap = 1-overlap + tot2_wo_overlap_oh = 2 - 2*overlap + core1 = np.arange(core_size) + core2 = np.arange(int(np.round(tot_wo_overlap*core_size)), int(np.round(tot_wo_overlap*core_size))+core_size) + core3 = np.concatenate(( np.arange(int(np.round(tot2_wo_overlap_oh*core_size)), int(np.round(tot2_wo_overlap_oh*core_size))+int(np.round(tot_wo_overlap*core_size))), \ + np.arange(int(np.round(overlap*core_size))) )) + return (core1, core2, core3) + + # no_AC_no_ABC_overlap + # Returns the neuron number belonging to each of three assemblies with "no AC, no ABC" overlap + # overlap: overlap between each two cell assemblies (0.1 equals "OVERLAP10 no AC, no ABC" and so on) + # return: the three assemblies + def no_AC_no_ABC_overlap(overlap): + tot_wo_overlap = 1-overlap + tot2_wo_overlap_oh = 2 - 2*overlap + core1 = np.arange(core_size) + core2 = np.arange(int(np.round(tot_wo_overlap*core_size)), int(np.round(tot_wo_overlap*core_size))+core_size) + core3 = np.arange(int(np.round(tot2_wo_overlap_oh*core_size)), int(np.round(tot2_wo_overlap_oh*core_size))+core_size) + return (core1, core2, core3) + + # no_BC_no_ABC_overlap + # Returns the neuron number belonging to each of three assemblies with "no BC, no ABC" overlap + # overlap: overlap between each two cell assemblies (0.1 equals "OVERLAP10 no BC, no ABC" and so on) + # return: the three assemblies + def 
no_BC_no_ABC_overlap(overlap): + tot_wo_overlap = 1-overlap + core1 = np.arange(core_size) + core2 = np.arange(int(np.round(tot_wo_overlap*core_size)), int(np.round(tot_wo_overlap*core_size))+core_size) + core3 = np.concatenate(( np.arange(int(np.round(tot_wo_overlap*core_size))+core_size, 2*int(np.round(tot_wo_overlap*core_size))+core_size), \ + np.arange(int(np.round(overlap*core_size))) )) + return (core1, core2, core3) + + # handling the different overlap paradigms: + if paradigm == "NOOVERLAP": + core1 = np.arange(core_size) + core2 = np.arange(core_size, 2*core_size) + core3 = np.arange(2*core_size, 3*core_size) + elif paradigm == "OVERLAP10": + core1, core2, core3 = full_overlap(0.1) + elif paradigm == "OVERLAP10 no ABC": + core1, core2, core3 = no_ABC_overlap(0.1) + elif paradigm == "OVERLAP10 no AC, no ABC": + core1, core2, core3 = no_AC_no_ABC_overlap(0.1) + elif paradigm == "OVERLAP10 no BC, no ABC": + core1, core2, core3 = no_BC_no_ABC_overlap(0.1) + elif paradigm == "OVERLAP15": + core1, core2, core3 = full_overlap(0.15) + elif paradigm == "OVERLAP15 no ABC": + core1, core2, core3 = no_ABC_overlap(0.15) + elif paradigm == "OVERLAP15 no AC, no ABC": + core1, core2, core3 = no_AC_no_ABC_overlap(0.15) + elif paradigm == "OVERLAP15 no BC, no ABC": + core1, core2, core3 = no_BC_no_ABC_overlap(0.15) + elif paradigm == "OVERLAP20": + core1, core2, core3 = full_overlap(0.2) + elif paradigm == "OVERLAP20 no ABC": + core1, core2, core3 = no_ABC_overlap(0.2) + elif paradigm == "OVERLAP20 no AC, no ABC": + core1, core2, core3 = no_AC_no_ABC_overlap(0.2) + elif paradigm == "OVERLAP20 no BC, no ABC": + core1, core2, core3 = no_BC_no_ABC_overlap(0.2) + else: + raise ValueError("Unknown paradigm: " + paradigm) + + return (core1, core2, core3) diff --git a/analysis/probDistributions.py b/analysis/probDistributions.py deleted file mode 100755 index 93498ed..0000000 --- a/analysis/probDistributions.py +++ /dev/null @@ -1,542 +0,0 @@ 
-############################################################################################ -### Functions to analyze and plot weight and activity distributions from simulation data ### -############################################################################################ - -### Copyright 2019-2021 Jannik Luboeinski -### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -from readWeightData import * -import sys -import os.path -import numpy as np -from pathlib import Path -from subprocess import call - -# plotDistributions -# Creates data and plot files of the weight and activity distribution at a given time -# nppath: path to the network_plots directory to read the data from -# timestamp: a string containing date and time (to access correct paths) -# add: additional descriptor -# Nl: the number of excitatory neurons in one line of a quadratic grid -# time: the time that at which the weights shall be read out -# core: array of indices of the cell assembly (core) neurons -def plotDistributions(nppath, timestamp, add, Nl, time, core): - - # look for data file [timestamp]_net_[time].txt - path = "" - rawpaths = Path(nppath) - - for x in rawpaths.iterdir(): - - tmppath = str(x) - - if (timestamp + "_net_" + time + ".txt") in tmppath: - path = tmppath - - if path == "": - raise FileNotFoundError('"' + timestamp + '_net_' + time + '.txt" was not found') - - # read data from file - try: - connections, h, z, v = readWeightMatrixData(path, Nl) - - except ValueError: - raise - except OSError: - raise - - # determine subpopulations - N_tot = Nl**2 # total number of neurons - N_CA = len(core) # number of neurons in the cell assembly - N_control = N_tot - N_CA # number of neurons in the control subpopulation - all = np.arange(N_tot) - noncore = all[np.logical_not(np.in1d(all, core))] # array of indices of the neurons not in the cell assembly (core) - - block_CA_within = np.ones((N_CA, N_CA), dtype=bool) # array of ones for the synapses within the 
cell assembly - block_CA_outgoing = np.ones((N_CA, N_control), dtype=bool) # array of ones for the synapses outgoing from the cell assembly - block_CA_incoming = np.ones((N_control, N_CA), dtype=bool) # array of ones for the synapses incoming to the cell assembly - block_control = np.ones((N_control, N_control), dtype=bool) # array of ones for the synapses within the control subpopulation - - mask_CA_within = np.append(np.append(block_CA_within, np.logical_not(block_CA_outgoing), axis=1), \ - np.logical_not(np.append(block_CA_incoming, block_control, axis=1)), - axis=0) # binary mask defining the synapses within the cell assembly - mask_CA_outgoing = np.append(np.append(np.logical_not(block_CA_within), block_CA_outgoing, axis=1), \ - np.logical_not(np.append(block_CA_incoming, block_control, axis=1)), - axis=0) # binary mask defining the synapses outgoing from the cell assembly - mask_CA_incoming = np.append(np.logical_not(np.append(block_CA_within, block_CA_outgoing, axis=1)), \ - np.append(block_CA_incoming, np.logical_not(block_control), axis=1), - axis=0) # binary mask defining the synapses incoming to the cell assembly - mask_control = np.append(np.logical_not(np.append(block_CA_within, block_CA_outgoing, axis=1)), \ - np.append(np.logical_not(block_CA_incoming), block_control, axis=1), - axis=0) # binary mask defining the synapses within the control subpopulation - - h_CA_within = h[mask_CA_within] - h_CA_outgoing = h[mask_CA_outgoing] - h_CA_incoming = h[mask_CA_incoming] - h_control = h[mask_control] - - z_CA_within = z[mask_CA_within] - z_CA_outgoing = z[mask_CA_outgoing] - z_CA_incoming = z[mask_CA_incoming] - z_control = z[mask_control] - - v_CA = v.flatten()[np.in1d(all, core)] - v_control = v.flatten()[np.logical_not(np.in1d(all, core))] - - hstep = (np.max(h)-np.min(h)) / 100 - zstep = (np.max(z)-np.min(z)) / 100 - vstep = (np.max(v)-np.min(v)) / 100 - - binh = np.concatenate((np.linspace(np.min(h), np.max(h), 100), np.max(h)), axis=None) # create 
range of bins for marginalProbDist(h...) - binz = np.concatenate((np.linspace(np.min(z), np.max(z), 100), np.max(z)), axis=None) # create range of bins for marginalProbDist(z...) - binv = np.concatenate((np.linspace(np.min(v), np.max(v), 100), np.max(v)), axis=None) # create range of bins for marginalProbDist(v...) - - valh = np.linspace(np.min(h), np.max(h), 100) + hstep/2 # use mean values instead of lower bounds of the bins as values - valz = np.linspace(np.min(z), np.max(z), 100) + zstep/2 # use mean values instead of lower bounds of the bins as values - valv = np.linspace(np.min(v), np.max(v), 100) + vstep/2 # use mean values instead of lower bounds of the bins as values - - buf, ph_CA_within = marginalProbDist(h_CA_within, binning = True, bin_edges = binh) - buf, ph_CA_outgoing = marginalProbDist(h_CA_outgoing, binning = True, bin_edges = binh) - buf, ph_CA_incoming = marginalProbDist(h_CA_incoming, binning = True, bin_edges = binh) - buf, ph_control = marginalProbDist(h_control, binning = True, bin_edges = binh) - buf, pz_CA_within = marginalProbDist(z_CA_within, binning = True, bin_edges = binz) - buf, pz_CA_outgoing = marginalProbDist(z_CA_outgoing, binning = True, bin_edges = binz) - buf, pz_CA_incoming = marginalProbDist(z_CA_incoming, binning = True, bin_edges = binz) - buf, pz_control = marginalProbDist(z_control, binning = True, bin_edges = binz) - buf, pv_CA = marginalProbDist(v_CA, binning = True, bin_edges = binv) - buf, pv_control = marginalProbDist(v_control, binning = True, bin_edges = binv) - - f = open(timestamp + "_eweight_dist_" + time + add + ".txt", "w") - for i in range(len(valh)): - f.write(str(valh[i]) + "\t\t" + str(ph_CA_within[i]) + "\t\t" + str(ph_CA_outgoing[i]) + "\t\t" + \ - str(ph_CA_incoming[i]) + "\t\t" + str(ph_control[i]) + "\n") - f.close() - - f = open(timestamp + "_lweight_dist_" + time + add + ".txt", "w") - for i in range(len(valz)): - f.write(str(valz[i]) + "\t\t" + str(pz_CA_within[i]) + "\t\t" + 
str(pz_CA_outgoing[i]) + "\t\t" + \ - str(pz_CA_incoming[i]) + "\t\t" + str(pz_control[i]) + "\n") - f.close() - - f = open(timestamp + "_act_dist_" + time + add + ".txt", "w") - for i in range(len(valv)): - f.write(str(valv[i]) + "\t\t" + str(pv_CA[i]) + "\t\t" + str(pv_control[i]) + "\n") - f.close() - - if os.path.exists("plot_dist.gpl"): - f = open("plot_dist.gpl", "a") - else: - f = open("plot_dist.gpl", "w") - f.write("#set terminal png size 1024,640 enhanced\nset terminal pdf enhanced\n\n" + \ - "#set style fill transparent solid 0.8 noborder\n" + \ - "set style fill transparent pattern 4 bo\n" + \ - "set log y\nset format y \"%.0e\"\nset yrange [3e-06:1]\nset key outside\n\n") - - f.write("set output \"" + timestamp + "_eweight_dist_" + time + add + ".pdf\"\n") - f.write("set xlabel \"Early-phase weight / nC\"\nset ylabel \"Relative frequency\"\n") - f.write("plot [0.3:0.9] \"" + timestamp + "_eweight_dist_" + time + add + ".txt\" using 1:($1 > 0 ? $2 : $2) t \"CA\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? $3 : $3) t \"outgoing\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? $4 : $4) t \"incoming\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? $5 : $5) t \"control\" with boxes\n") - - f.write("\nset output \"" + timestamp + "_lweight_dist_" + time + add + ".pdf\"\n") - f.write("set xlabel \"Late-phase weight\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") - f.write("plot \"" + timestamp + "_lweight_dist_" + time + add + ".txt\" using 1:($1 > 0 ? $2 : $2) t \"CA\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? $3 : $3) t \"outgoing\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? $4 : $4) t \"incoming\" with boxes, \\\n" + \ - "\"\" using 1:($1 > 0 ? 
$5 : $5) t \"control\" with boxes\n") - - f.write("\nset output \"" + timestamp + "_act_dist_" + time + add + ".pdf\"\n") - f.write("set xlabel \"Neuronal firing rate / Hz\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") - f.write("plot \"" + timestamp + "_act_dist_" + time + add + ".txt\" using 1:2 t \"CA\" with boxes, " + \ - "\"\" using 1:3 t \"control\" with boxes\n\n") - - f.close() - - call(["gnuplot", "plot_dist.gpl"]) - -# plotDistributions3CAs -# Creates data and plot files of the weight distribution of a network with 3 overlapping assemblies at a given time -# nppath: path to the network_plots directory to read the data from -# timestamp: a string containing date and time (to access correct paths) -# add: additional descriptor -# Nl: the number of excitatory neurons in one line of a quadratic grid -# time: the time that at which the weights shall be read out -# coreA: array of indices of the first cell assembly (core) neurons -# coreB [optional]: array of indices of the second cell assembly (core) neurons -# coreC [optional]: array of indices of the third cell assembly (core) neurons -def plotDistributions3CAs(nppath, timestamp, add, Nl, time, coreA, coreB = None, coreC = None): - - # Look for data file ("*_net_*", if only already averaged file "*_net_av_*" is available, exception is raised) - path = "" - rawpaths = Path(nppath) - - for x in rawpaths.iterdir(): - - tmppath = str(x) - - if (timestamp + "_net_" + time + ".txt") in tmppath: - already_averaged = False - path = tmppath - elif path == "" and (timestamp + "_net_av_" + time + ".txt") in tmppath: - already_averaged = True - path = tmppath - - if path == "": - raise FileNotFoundError('neither "' + timestamp + '_net_' + time_for_readout + '.txt" nor "' + timestamp + '_net_av_' + time_for_readout + '.txt" was found') - elif already_averaged == True: - raise FileNotFoundError('only "' + timestamp + '_net_av_' + time_for_readout + '.txt" was found, not "' + timestamp + '_net_' + 
time_for_readout + '.txt", which is needed for information-theoretic computations') - - # read data from file - try: - connections, h, z, v = readWeightMatrixData(path, Nl) - - except ValueError: - raise - except OSError: - raise - - # determine synapses within the cell assemblies - N_tot = Nl**2 # total number of neurons - - mask_coreA = np.zeros((N_tot, N_tot), dtype=bool) - for syn_pre in coreA: - for syn_post in coreA: - mask_coreA[syn_pre,syn_post] = True - - mask_coreB = np.zeros((N_tot, N_tot), dtype=bool) - if coreB is not None: - for syn_pre in coreB: - for syn_post in coreB: - mask_coreB[syn_pre,syn_post] = True - - mask_coreC = np.zeros((N_tot, N_tot), dtype=bool) - if coreC is not None: - for syn_pre in coreC: - for syn_post in coreC: - mask_coreC[syn_pre,syn_post] = True - - # find control synapses (all synapses that are not within a cell assembly) - mask_control = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) - - # find exclusive intersections - mask_I_AB = np.logical_and( np.logical_and(mask_coreA, mask_coreB), np.logical_not(mask_coreC) ) - mask_I_AC = np.logical_and( np.logical_and(mask_coreA, mask_coreC), np.logical_not(mask_coreB) ) - mask_I_BC = np.logical_and( np.logical_and(mask_coreB, mask_coreC), np.logical_not(mask_coreA) ) - mask_I_ABC = np.logical_and( mask_coreA, np.logical_and(mask_coreB, mask_coreC) ) - - # remove intersections from exclusive cores - mask_coreA = np.logical_and(mask_coreA, \ - np.logical_and(np.logical_not(mask_I_AB), \ - np.logical_and(np.logical_not(mask_I_AC), np.logical_not(mask_I_ABC)))) - mask_coreB = np.logical_and(mask_coreB, \ - np.logical_and(np.logical_not(mask_I_AB), \ - np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC)))) - mask_coreC = np.logical_and(mask_coreC, \ - np.logical_and(np.logical_not(mask_I_AC), \ - np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC)))) - - # tests (each should yield true) - '''print("Test:", not 
np.any(np.logical_and(mask_coreA, mask_coreB))) - print("Test:", not np.any(np.logical_and(mask_coreA, mask_coreC))) - print("Test:", not np.any(np.logical_and(mask_coreB, mask_coreC))) - print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_AC))) - print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_I_AC, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_I_AC, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_I_BC, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_control, mask_coreA))) - print("Test:", not np.any(np.logical_and(mask_control, mask_coreB))) - print("Test:", not np.any(np.logical_and(mask_control, mask_coreC))) - print("Test:", not np.any(np.logical_and(mask_control, mask_I_AB))) - print("Test:", not np.any(np.logical_and(mask_control, mask_I_AC))) - print("Test:", not np.any(np.logical_and(mask_control, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_control, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_AB))) - print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_AC))) - print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_AB))) - print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_AC))) - print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_ABC))) - print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_AB))) - print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_AC))) - print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_BC))) - print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_ABC)))''' - - - h_coreA = h[mask_coreA] - h_coreB = h[mask_coreB] - h_coreC = h[mask_coreC] - h_I_AB = 
h[mask_I_AB] - h_I_AC = h[mask_I_AC] - h_I_BC = h[mask_I_BC] - h_I_ABC = h[mask_I_ABC] - h_control = h[mask_control] - - z_coreA = z[mask_coreA] - z_coreB = z[mask_coreB] - z_coreC = z[mask_coreC] - z_I_AB = z[mask_I_AB] - z_I_AC = z[mask_I_AC] - z_I_BC = z[mask_I_BC] - z_I_ABC = z[mask_I_ABC] - z_control = z[mask_control] - - # mean and standard deviation of the subpopulations - #mean_z_coreA = np.mean(z_coreA) - #mean_z_coreB = np.mean(z_coreB) - #mean_z_coreC = np.mean(z_coreC) - #mean_z_I_AB = np.mean(z_I_AB) - #mean_z_I_AC = np.mean(z_I_AC) - #mean_z_I_BC = np.mean(z_I_BC) - #mean_z_I_ABC = np.mean(z_I_ABC) - #mean_z_control = np.mean(z_control) - #sd_z_coreA = np.std(z_coreA) - #sd_z_coreB = np.std(z_coreB) - #sd_z_coreC = np.std(z_coreC) - #sd_z_I_AB = np.std(z_I_AB) - #sd_z_I_AC = np.std(z_I_AC) - #sd_z_I_BC = np.std(z_I_BC) - #sd_z_I_ABC = np.std(z_I_ABC) - #sd_z_control = np.std(z_control) - - hstep = (np.max(h)-np.min(h)) / 100 - zstep = (np.max(z)-np.min(z)) / 100 - - binh = np.concatenate((np.linspace(np.min(h), np.max(h), 100), np.max(h)), axis=None) # create range of bins for marginalProbDist(h...) - binz = np.concatenate((np.linspace(np.min(z), np.max(z), 100), np.max(z)), axis=None) # create range of bins for marginalProbDist(z...) 
- - valh = np.linspace(np.min(h), np.max(h), 100) + hstep/2 # use mean values instead of lower bounds of the bins as values - valz = np.linspace(np.min(z), np.max(z), 100) + zstep/2 # use mean values instead of lower bounds of the bins as values - - numconn = len(h[connections]) - - buf, ph_coreA = marginalProbDist(h_coreA, binning = True, bin_edges = binh, norm = numconn) - buf, pz_coreA = marginalProbDist(z_coreA, binning = True, bin_edges = binz, norm = numconn) - if coreB is not None: - buf, ph_coreB = marginalProbDist(h_coreB, binning = True, bin_edges = binh, norm = numconn) - buf, pz_coreB = marginalProbDist(z_coreB, binning = True, bin_edges = binz, norm = numconn) - if h_I_AB.size > 0: - buf, ph_I_AB = marginalProbDist(h_I_AB, binning = True, bin_edges = binh, norm = numconn) - buf, pz_I_AB = marginalProbDist(z_I_AB, binning = True, bin_edges = binz, norm = numconn) - if coreC is not None: - buf, ph_coreC = marginalProbDist(h_coreC, binning = True, bin_edges = binh, norm = numconn) - buf, pz_coreC = marginalProbDist(z_coreC, binning = True, bin_edges = binz, norm = numconn) - if h_I_AC.size > 0: - buf, ph_I_AC = marginalProbDist(h_I_AC, binning = True, bin_edges = binh, norm = numconn) - buf, pz_I_AC = marginalProbDist(z_I_AC, binning = True, bin_edges = binz, norm = numconn) - if h_I_BC.size > 0: - buf, ph_I_BC = marginalProbDist(h_I_BC, binning = True, bin_edges = binh, norm = numconn) - buf, pz_I_BC = marginalProbDist(z_I_BC, binning = True, bin_edges = binz, norm = numconn) - if h_I_ABC.size > 0: - buf, ph_I_ABC = marginalProbDist(h_I_ABC, binning = True, bin_edges = binh, norm = numconn) - buf, pz_I_ABC = marginalProbDist(z_I_ABC, binning = True, bin_edges = binz, norm = numconn) - buf, ph_control = marginalProbDist(h_control, binning = True, bin_edges = binh, norm = numconn) - buf, pz_control = marginalProbDist(z_control, binning = True, bin_edges = binz, norm = numconn) - - # Write weight distribution data files - fh = open(timestamp + 
"_eweight_dist_" + time + add + ".txt", "w") - fz = open(timestamp + "_lweight_dist_" + time + add + ".txt", "w") - - for i in range(len(valh)): - fh.write(str(valh[i]) + "\t\t" + str(ph_coreA[i]) + "\t\t") - fz.write(str(valz[i]) + "\t\t" + str(pz_coreA[i]) + "\t\t") - - if coreB is not None: - fh.write(str(ph_coreB[i]) + "\t\t") - fz.write(str(pz_coreB[i]) + "\t\t") - - if h_I_AB.size > 0: - fh.write(str(ph_I_AB[i]) + "\t\t") - fz.write(str(pz_I_AB[i]) + "\t\t") - else: - fh.write("nan\t\t") - fz.write("nan\t\t") - else: - fh.write("nan\t\tnan\t\t") - fz.write("nan\t\tnan\t\t") - - if coreC is not None: - fh.write(str(ph_coreC[i]) + "\t\t") - fz.write(str(pz_coreC[i]) + "\t\t") - - if h_I_AC.size > 0: - fh.write(str(ph_I_AC[i]) + "\t\t") - fz.write(str(pz_I_AC[i]) + "\t\t") - else: - fh.write("nan\t\t") - fz.write("nan\t\t") - - if h_I_BC.size > 0: - fh.write(str(ph_I_BC[i]) + "\t\t") - fz.write(str(pz_I_BC[i]) + "\t\t") - else: - fh.write("nan\t\t") - fz.write("nan\t\t") - - if h_I_ABC.size > 0: - fh.write(str(ph_I_ABC[i]) + "\t\t") - fz.write(str(pz_I_ABC[i]) + "\t\t") - else: - fh.write("nan\t\t") - fz.write("nan\t\t") - else: - fh.write("nan\t\tnan\t\tnan\t\tnan\t\t") - fz.write("nan\t\tnan\t\tnan\t\tnan\t\t") - fh.write(str(ph_control[i]) + "\n") - fz.write(str(pz_control[i]) + "\n") - - fh.close() - fz.close() - - if os.path.exists("plot_dist.gpl"): - f = open("plot_dist.gpl", "a") - else: - f = open("plot_dist.gpl", "w") - f.write("#set terminal png size 1024,640 enhanced\nset terminal pdf enhanced\n\n" + \ - "set style fill transparent pattern 4 bo\n" + \ - "set log y\nset format y \"%.0e\"\nset yrange [3e-06:1]\nset key outside\n\n") - - f.write("set output \"" + timestamp + "_eweight_dist_" + time + add + ".pdf\"\n") - f.write("set xlabel \"Early-phase weight / nC\"\nset ylabel \"Relative frequency\"\n") - f.write("plot [0.3:0.9] \"" + timestamp + "_eweight_dist_" + time + add + ".txt\" using 1:($1 > 0 ? 
$2 : $2) t \"A\" with boxes, \\\n") - if coreB is not None: - f.write("\"\" using 1:($1 > 0 ? $3 : $3) t \"B\" with boxes, \\\n") - if coreC is not None: - f.write("\"\" using 1:($1 > 0 ? $5 : $5) t \"C\" with boxes, \\\n") - if coreB is not None: - f.write("\"\" using 1:($1 > 0 ? $4 : $4) t \"I_{AB}\" with boxes, \\\n") - if coreC is not None: - f.write("\"\" using 1:($1 > 0 ? $6 : $6) t \"I_{AC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? $7 : $7) t \"I_{BC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? $8 : $8) t \"I_{ABC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? $9 : $9) t \"control\" lc rgb \"#eeeeee\" with boxes\n") - - f.write("\nset output \"" + timestamp + "_lweight_dist_" + time + add + ".pdf\"\n") - f.write("set xlabel \"Late-phase weight\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") - f.write("plot \"" + timestamp + "_lweight_dist_" + time + add + ".txt\" using 1:($1 > 0 ? $2 : $2) t \"A\" with boxes, \\\n") - if coreB is not None: - f.write("\"\" using 1:($1 > 0 ? $3 : $3) t \"B\" with boxes, \\\n") - if coreC is not None: - f.write("\"\" using 1:($1 > 0 ? $5 : $5) t \"C\" with boxes, \\\n") - if coreB is not None: - f.write("\"\" using 1:($1 > 0 ? $4 : $4) t \"I_{AB}\" with boxes, \\\n") - if coreC is not None: - f.write("\"\" using 1:($1 > 0 ? $6 : $6) t \"I_{AC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? $7 : $7) t \"I_{BC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? $8 : $8) t \"I_{ABC}\" with boxes, \\\n") - f.write("\"\" using 1:($1 > 0 ? 
$9 : $9) t \"control\" lc rgb \"#eeeeee\" with boxes fill transparent pattern 4\n") - - f.close() - - call(["gnuplot", "plot_dist.gpl"]) - -# mapToBins -# Maps the values of an array to bins defined by another array -# a: array of values -# b: array of bin edges (including the terminal one) -# return: array of the shape of a with values of b -def mapToBins(a, b): - a = a.flatten() - - for i in range(len(a)): - - if a[i] < b[0] or a[i] > b[len(b)-1]: - raise ValueError("Value " + a[i] + " at index " + i + " is out of range.") - - for j in reversed(range(len(b)-1)): - if a[i] >= b[j]: - a[i] = (b[j+1] + b[j]) / 2 - break - return a - -# marginalProbDist -# Computes a marginal probability distribution -# a: array of outcomes (e.g., array of activities of all neurons in a network or array of weights of all synapses in a network) -# binning [optional]: specifies if bins are used to discretize the values of the distribution -# bin_edges [optional]: pre-specified range of binning edges; only applicable if binning is used -# norm [optional]: value by which to normalize -# return: array of values and corresponding array of their probabilities -def marginalProbDist(a, binning = False, bin_edges = None, norm = None): - - if binning == True: - - if np.max(a) != np.min(a): - values_low = np.arange(np.min(a), np.max(a), (np.max(a)-np.min(a)) / 100) # create array of activity value bins - values_mean = values_low + (np.max(a)-np.min(a)) / 200 # use mean values instead of lower bounds of the bins as values - else: - values_low = np.array([np.min(a)]) - values_mean = values_low - - if bin_edges is None: - bin_edges = np.concatenate((values_low, np.max(a)), axis=None) # add last edge - - freq_of_values = np.histogram(a, bin_edges)[0] # create histogram of activity value occurrences - - #values, freq_of_values = np.unique(mapToBins(a, bin_edges), return_counts=True, axis=0) # yields the same result - else: - #a = np.sort(a) - values_mean, freq_of_values = np.unique(a, 
return_counts=True) # determine and count occuring activities - freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize - #dist = np.asarray(values, freq_of_values).T # create distribution - - if norm is None: - freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize only over this quantity - else: - freq_of_values = freq_of_values / norm # normalize over all values - - return values_mean, freq_of_values - -# jointProbDist -# Computes a joint probability distribution -# a1: first array of outcomes (e.g., array of activities of all neurons in a network or array of weights of all synapses in a network) -# a2: second array of outcomes -# binning [optional]: specifies if bins are used to discretize the values of the distribution -# return: array of value pairs and corresponding array of their probabilities -def jointProbDist(a1, a2, binning = False): - - if binning == True: - ab = np.concatenate((a1,a2)) - - try: - values = np.arange(np.min(ab), np.max(ab), (np.max(ab)-np.min(ab)) / 100) # create array of activity value bins - except ValueError: - values = np.array([np.min(ab)]) - bin_edges = np.concatenate((values, np.max(ab)), axis=None) # add last edge - a1 = mapToBins(a1, bin_edges) - a2 = mapToBins(a2, bin_edges) - - ja = np.array([a1.flatten(), a2.flatten()]).T # array of pairs of the two outcomes (for one neuron or synapse or ... at two times) - - value_pairs, freq_of_values = np.unique(ja, return_counts=True, axis=0) # determine and count occurring activities - freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize - else: - ja = np.array([a1.flatten(),a2.flatten()]).T # array of pairs of the two outcomes (for one neuron or synapse or ... 
at two times) - - value_pairs, freq_of_values = np.unique(ja, return_counts=True, axis=0) # determine and count occurring activities - freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize - #dist = np.asarray(value_pairs, freq_of_values).T # create distribution - - return value_pairs, freq_of_values - -# main -# Plots distributions from two simulations at different times -# argv[]: timestamps of two simulations - -if __name__ == "__main__": - - if len(sys.argv) < 3: - print("Not enough arguments provided!") - exit() - else: - ts1 = str(sys.argv[1]) # timestamp for simulation data before consolidation - ts2 = str(sys.argv[2]) # timestamp for simulation data after consolidation - - core = np.arange(150) # size of the assembly - plotDistributions(".", ts1, "_150default", 40, "20.0", core) # before 10s-recall - plotDistributions(".", ts1, "_150default", 40, "20.1", core) # after 10s-recall - plotDistributions(".", ts2, "_150default", 40, "28810.0", core) # before 8h-recall - plotDistributions(".", ts2, "_150default", 40, "28810.1", core) # after 8h-recall diff --git a/analysis/readWeightData.py b/analysis/readWeightData.py deleted file mode 100755 index f6eaf96..0000000 --- a/analysis/readWeightData.py +++ /dev/null @@ -1,88 +0,0 @@ -################################################################################################ -### Functions to read the connections, early- and late-phase weight matrix, and firing rates ### -### from network simulation data ### -################################################################################################ - -### Copyright 2017-2021 Jannik Luboeinski -### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -import numpy as np - -# readWeightMatrixData -# Reads complete weight matrix data from a file (modified from plotFunctions.py) -# filename: name of the file to read the data from -# Nl: number of neurons in one row/column -# return: the adjacency matrix, the early-phase weight 
matrix, the late-phase weight matrix, the firing rate vector -def readWeightMatrixData(filename, Nl): - - # read weight matrices and firing rates from file - try: - with open(filename) as f: - rawdata = f.read() - except OSError: - raise - - rawdata = rawdata.split('\n\n') - rawmatrix_h = rawdata[0].split('\n') - rawmatrix_z = rawdata[1].split('\n') - rawmatrix_v = rawdata[2].split('\n') - - rows = len(rawmatrix_v) - - if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != Nl): - raise ValueError(str(rows) + ' instead of ' + str(Nl) + ' lines in data file "' + filename + '"') - f.close() - exit() - - v = np.zeros((Nl,Nl)) - h = np.zeros((Nl**2,Nl**2)) - z = np.zeros((Nl**2,Nl**2)) - - for i in range(Nl**2): - if i < Nl: - value0 = rawmatrix_v[i].split('\t\t') - value1 = rawmatrix_h[i].split('\t\t') - value2 = rawmatrix_z[i].split('\t\t') - - for j in range(Nl**2): - if i < Nl and j < Nl: - v[i][j] = float(value0[j]) - h[i][j] = float(value1[j]) - z[i][j] = float(value2[j]) - - f.close() - connections = (h > 0) - - return connections, h, z, v - -# readWeightVectorData -# Reads complete weight vector data from a file -# filename: name of the file to read the data from -# N: the number of synapses -# return: the early-phase weight vector and the late-phase weight vector -def readWeightVectorData(filename, N): - - try: - with open(filename) as f: - rawdata = f.read() - except OSError: - raise - - rawdata = rawdata.split('\n') - - if (len(rawdata) - 1 != N): - raise ValueError(str(len(rawdata) - 1) + ' instead of ' + str(N) + ' lines in data file "' + filename + '"') - f.close() - exit() - - h = np.zeros(N) - z = np.zeros(N) - - for i in range(N): - values = rawdata[i].split('\t\t') - h[i] = float(values[0]) - z[i] = float(values[1]) - - f.close() - - return h, z diff --git a/analysis/utilityFunctions.py b/analysis/utilityFunctions.py new file mode 100755 index 0000000..9517ebb --- /dev/null +++ b/analysis/utilityFunctions.py @@ -0,0 +1,246 @@ 
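The module added below reads network snapshots from plain-text files: three blocks (early-phase weights `h`, late-phase weights `z`, firing rates `v`), separated by blank lines, with values separated by double tabs. A minimal, self-contained sketch of parsing this format (the miniature file content is made up for illustration, and for brevity all three blocks are 2×2 here; in real simulation output the weight matrices have `Nl_exc**2` rows, while only `v` has `Nl_exc`):

```python
import numpy as np

# Hypothetical miniature "[timestamp]_net_[time].txt" content for a 2x2 grid;
# three blocks (h, z, v) separated by blank lines, values separated by '\t\t'
h_block = "0.42\t\t0.0\n0.0\t\t0.42"
z_block = "0.1\t\t0.0\n0.0\t\t0.0"
v_block = "5.0\t\t0.0\n0.0\t\t2.5"
rawdata = "\n\n".join([h_block, z_block, v_block])

# split into blocks, then into rows, then into values
blocks = [b.split("\n") for b in rawdata.split("\n\n")]
h = np.array([[float(x) for x in row.split("\t\t")] for row in blocks[0]])
z = np.array([[float(x) for x in row.split("\t\t")] for row in blocks[1]])
v = np.array([[float(x) for x in row.split("\t\t")] for row in blocks[2]])

epsilon = 1e-11  # threshold below which a weight counts as zero
connections = (h > epsilon)  # synapses exist where the early-phase weight is nonzero
print(int(connections.sum()))  # → 2
```

The `epsilon` thresholding is the same trick the module uses to recover the adjacency matrix from the early-phase weights.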
+########################################################################################
+### Utility functions for different purposes regarding the reading of simulation data ###
+########################################################################################
+
+### Copyright 2017-2021 Jannik Luboeinski
+### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
+
+import numpy as np
+import os
+from pathlib import Path
+epsilon = 1e-11 # very small number that is counted as zero
+
+######################################
+# readWeightMatrixData
+# Reads complete early- and late-phase weight matrix and firing rates from network simulation data
+# (also cf. loadWeightMatrix in adjacencyFunctions.py and readWeightMatrix in the simulation code)
+# filename: name of the file to read the data from
+# Nl_exc: number of neurons in one row/column
+# return: the adjacency matrix, the early-phase weight matrix, the late-phase weight matrix, the firing rate vector
+def readWeightMatrixData(filename, Nl_exc):
+
+	# read weight matrices and firing rates from file
+	try:
+		with open(filename) as f:
+			rawdata = f.read()
+	except OSError:
+		raise
+
+	rawdata = rawdata.split('\n\n')
+	rawmatrix_h = rawdata[0].split('\n')
+	rawmatrix_z = rawdata[1].split('\n')
+	rawmatrix_v = rawdata[2].split('\n')
+
+	rows = len(rawmatrix_v)
+
+	if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != Nl_exc):
+		raise ValueError(str(rows) + ' instead of ' + str(Nl_exc) + ' lines in data file "' + filename + '"')
+
+	v = np.zeros((Nl_exc,Nl_exc))
+	h = np.zeros((Nl_exc**2,Nl_exc**2))
+	z = np.zeros((Nl_exc**2,Nl_exc**2))
+
+	for i in range(Nl_exc**2):
+		if i < Nl_exc:
+			value0 = rawmatrix_v[i].split('\t\t')
+		value1 = rawmatrix_h[i].split('\t\t')
+		value2 = rawmatrix_z[i].split('\t\t')
+
+		for j in range(Nl_exc**2):
+			if i < Nl_exc and j < Nl_exc:
+				v[i][j] = float(value0[j])
+			h[i][j] = float(value1[j])
+			z[i][j] = float(value2[j])
+
+	connections = (h > epsilon)
+
+	return connections, h, z, v
+
+######################################
+# readWeightVectorData
+# Reads complete weight vector data from a file
+# filename: name of the file to read the data from
+# N: the number of synapses
+# return: the early-phase weight vector and the late-phase weight vector
+def readWeightVectorData(filename, N):
+
+	try:
+		with open(filename) as f:
+			rawdata = f.read()
+	except OSError:
+		raise
+
+	rawdata = rawdata.split('\n')
+
+	if (len(rawdata) - 1 != N):
+		raise ValueError(str(len(rawdata) - 1) + ' instead of ' + str(N) + ' lines in data file "' + filename + '"')
+
+	h = np.zeros(N)
+	z = np.zeros(N)
+
+	for i in range(N):
+		values = rawdata[i].split('\t\t')
+		h[i] = float(values[0])
+		z[i] = float(values[1])
+
+	return h, z
+
+######################################
+# ftos
+# Converts a float value to a string with a pre-defined number of decimal places
+# f: a floating-point number
+# output_places [optional]: number of decimal places for output numbers
+# return: a string
+def ftos(f, output_places = 7):
+	return str(round(f, output_places))
+
+######################################
+# cond_print
+# Prints the given arguments only if the condition argument is true
+# pr: printing if True, not printing if False
+# args: the arguments to be printed
+def cond_print(pr, *args):
+	if pr:
+		print(*args)
+
+######################################
+# readParams
+# Reads some parameters from a "[timestamp]_PARAMS.txt" file
+# path: path to the parameter file to read the data from
+# return: a list containing the values of all the parameters read
+def readParams(path):
+
+	try:
+		f = open(path)
+
+	except IOError:
+		print('Error opening "' + path + '"')
+		exit()
+
+	r_CA = -1 # radius of the stimulated core
+	s_CA = -1
+
+	# read parameters from file
+	rawdata = f.read()
+	rawdata = rawdata.split('\n')
+	nn = len(rawdata)
+	f.close()
+
+	for
i in range(nn): + segs = rawdata[i].split(' ') + + if segs[0] == "Ca_pre": + Ca_pre = float(segs[2]) + elif segs[0] == "Ca_post": + Ca_post = float(segs[2]) + elif segs[0] == "theta_p": + theta_p = float(segs[2]) + elif segs[0] == "theta_d": + theta_d = float(segs[2]) + elif segs[0] == "R_mem": + R_mem = float(segs[2]) + elif segs[0] == "learning": + if segs[3] == "": + lprot = "none" + else: + lprot = segs[3] + elif segs[0] == "stimulus": # old version of previous condition + lprot = segs[2] + elif segs[0] == "recall" and segs[1] == "stimulus": + rprot = segs[3] + elif segs[0] == "recall" and segs[1] != "fraction": # old version of previous condition + rprot = segs[2] + elif segs[0] == "recall" and segs[1] == "fraction": + recall_fraction = segs[3] + elif segs[0] == "pc" or segs[0] == "p_c": + p_c = float(segs[2]) + elif segs[0] == "w_ei": + w_ei = float(segs[2]) + elif segs[0] == "w_ie": + w_ie = float(segs[2]) + elif segs[0] == "w_ii": + w_ii = float(segs[2]) + elif segs[0] == "theta_pro_c": + theta_pro_c = float(segs[2]) + elif segs[0] == "N_stim": + N_stim = int(segs[2]) + elif segs[0] == "I_const" or segs[0] == "I_0": + I_0 = float(segs[2]) + elif segs[0] == "dt": + dt = float(segs[2]) + elif segs[0] == "core" and (segs[2] == "first" or segs[2] == "random"): + s_CA = int(segs[3]) + + if s_CA == -1: + print('Warning: the size of the stimulated core could not be determined.') + s_CA = int(input('Enter it now to continue: ')) + + return [w_ei, w_ie, w_ii, p_c, Ca_pre, Ca_post, theta_p, theta_d, lprot, rprot, dt, theta_pro_c, s_CA, N_stim, I_0, R_mem, recall_fraction] + +###################################### +# hasTimestamp +# Checks if the given filename starts with a timestamp +# filename: a string +# return: true if presumably there is a timestamp, false if not +def hasTimestamp(filename): + try: + if filename[2] == "-" and filename[5] == "-" and filename[8] == "_" and \ + filename[11] == "-" and filename[14] == "-": + return True + except: + pass + + return 
False + +###################################### +# mergeRawData +# To merge data from multiple files into a single file - looks in a specified directory +# for files with a certain substring in the filename, and merges them (merging the content of the lines) +# rootpath: relative path to the output directory +# substr: string that the filename of files to be merged has to contain +# output_file: name of the output file +# remove_raw [optional]: removes the raw data files +# sep_str [optional]: the character or string by which to separate the lines in the output file +def mergeRawData(rootpath, substr, output_file, remove_raw=False, sep_str='\t\t'): + + path = Path(rootpath) + num_rows = -1 + all_data = [] + + for x in sorted(path.iterdir()): # loop through files in the output directory + x_str = str(x) + if not x.is_dir() and substr in x_str: + + f = open(x_str) + single_trial_data = f.read() + f.close() + + single_trial_data = single_trial_data.split('\n') + + if single_trial_data[-1] == "": + del single_trial_data[-1] # delete empty line + + if len(single_trial_data) != num_rows: + if num_rows == -1: + num_rows = len(single_trial_data) + all_data = single_trial_data + else: + raise Exception("Wrong number of rows encountered in: " + x_str) + else: + for i in range(num_rows): + all_data[i] += sep_str + single_trial_data[i] + + if remove_raw: + os.remove(x_str) + + fout = open(os.path.join(rootpath, output_file), "w") + for i in range(num_rows): + fout.write(all_data[i] + '\n') + fout.close() diff --git a/analysis/valueDistributions.py b/analysis/valueDistributions.py new file mode 100755 index 0000000..4217bff --- /dev/null +++ b/analysis/valueDistributions.py @@ -0,0 +1,811 @@ +############################################################################################ +### Functions to analyze and plot weight and activity distributions from simulation data ### +############################################################################################ + +### 
Copyright 2019-2021 Jannik Luboeinski
+### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
+
+from utilityFunctions import *
+import sys
+import warnings
+import os.path
+import numpy as np
+from pathlib import Path
+from subprocess import call
+
+# findOverallMinMax
+# Determines the minimum and maximum values across all data files that are located somewhere in a given directory
+# and that have the same readout time
+# nppath: path to the directory to read the data from
+# Nl_exc: the number of excitatory neurons in one line of a quadratic grid
+# time_for_readout: the time at which the weights shall be read out (as a string)
+# h_0: the initial weight, and normalization factor for z
+# return: two-dimensional array containing the minimum and maximum values for the four different data types
+def findOverallMinMax(nppath, Nl_exc, time_for_readout, h_0):
+
+	sysmin, sysmax = sys.float_info.min, sys.float_info.max
+	(h_min, z_min, w_min, v_min) = (sysmax, sysmax, sysmax, sysmax) # initially, assign the maximum possible value
+	(h_max, z_max, w_max, v_max) = (sysmin, sysmin, sysmin, sysmin) # initially, assign the minimum possible value
+
+	# recurseFindMinMax
+	# Function to recursively move through directories and look for data to find their minima/maxima
+	# path: the directory to iterate through
+	def recurseFindMinMax(path):
+		nonlocal h_min, z_min, w_min, v_min
+		nonlocal h_max, z_max, w_max, v_max
+
+		rawpaths = Path(path)
+		for x in rawpaths.iterdir():
+			if x.is_dir():
+				recurseFindMinMax(x) # if the found path is a directory, recurse into it
+
+			tmppath = str(x)
+			if ("_net_" + time_for_readout + ".txt") in tmppath: # file containing network simulation data found
+
+				# read data from file
+				try:
+					connections, h, z, v = readWeightMatrixData(tmppath, Nl_exc)
+					h = h[connections] # reduce h (leave out non-existent synapses)
+					z = h_0*z[connections] # reduce and normalize z
+					w = h + z # compute total synaptic weight
+				except ValueError:
+
raise
+				except OSError:
+					raise
+
+				# checkAndAdjust
+				# Compares two numbers and returns the larger/smaller one, depending on the operator
+				# a: a floating point number
+				# b: a floating point number
+				# op [optional]: the operator to be used
+				# return: the larger/smaller one of the two numbers
+				def checkAndAdjust(a, b, op=">"):
+					if b > a:
+						return b if op == ">" else a
+					else:
+						return a if op == ">" else b
+
+				# adjust maxima
+				h_max = checkAndAdjust(h_max, np.max(h), ">")
+				z_max = checkAndAdjust(z_max, np.max(z), ">")
+				w_max = checkAndAdjust(w_max, np.max(w), ">")
+				v_max = checkAndAdjust(v_max, np.max(v), ">")
+
+				# adjust minima
+				h_min = checkAndAdjust(h_min, np.min(h), "<")
+				z_min = checkAndAdjust(z_min, np.min(z), "<")
+				w_min = checkAndAdjust(w_min, np.min(w), "<")
+				v_min = checkAndAdjust(v_min, np.min(v), "<")
+
+	# iterate across files in the directory
+	recurseFindMinMax(nppath)
+
+	return np.array([[h_min, h_max], [z_min, z_max], [w_min, w_max], [v_min, v_max]])
+
+# plotDistributions
+# Creates data and plot files of the weight and activity distribution at a given time
+# nppath: path to the directory to read the data from
+# timestamp: a string containing date and time (to access correct paths) OR equal to "any"
+# add: additional descriptor
+# Nl_exc: the number of excitatory neurons in one line of a quadratic grid
+# time_for_readout: the time at which the weights shall be read out (as a string)
+# core: array of indices of the cell assembly (core) neurons
+# h_0 [optional]: the initial weight, and normalization factor for z
+# norm_all [optional]: specifies whether to normalize across all subpopulations (True) or across each subpop.
individually (False) +# - the first is recommendable if samples of different subpopulations are compared against each other, +# the latter is recommendable if different samples of the same subpopulation are compared +# bins [optional]: list of four arrays, each containing the bins for one of the four quantities +def plotDistributions(nppath, timestamp, add, Nl_exc, time_for_readout, core, h_0=0.420075, norm_all=False, bins=None): + + orgdir = os.getcwd() # store the current working directory + + # "any" case: not looking for a specific timestamp, but for any data with a certain time_for_readout in the given directory + if timestamp == "any": + if bins is None: + warnings.warn("Warning: timestamp=\"any\": bins should be provided by the calling function to compare across trials.") + rawpaths = Path(nppath) + for x in rawpaths.iterdir(): + tmppath = os.path.split(str(x))[1] # remove head from path + if ("_net_" + time_for_readout + ".txt") in tmppath: + timestamp = tmppath.split("_net_")[0] + plotDistributions(nppath, timestamp, add, Nl_exc, time_for_readout, core, h_0, norm_all, bins) # call this function again, now with specific timestamp + return + + # read data from file [timestamp]_net_[time_for_readout].txt + os.chdir(nppath) # change to data directory + try: + connections, h, z, v = readWeightMatrixData(timestamp + "_net_" + time_for_readout + ".txt", Nl_exc) + z = h_0*z # normalize z + w = h + z # compute total synaptic weight + except ValueError: + raise + except OSError: + raise + + # determine subpopulations + N_tot = Nl_exc**2 # total number of neurons + N_CA = len(core) # number of neurons in the cell assembly + N_control = N_tot - N_CA # number of neurons in the control subpopulation + all = np.arange(N_tot) + noncore = all[np.logical_not(np.in1d(all, core))] # array of indices of the neurons not in the cell assembly (core) + + block_CA_within = np.ones((N_CA, N_CA), dtype=bool) # array of ones for the synapses within the cell assembly + 
block_CA_outgoing = np.ones((N_CA, N_control), dtype=bool) # array of ones for the synapses outgoing from the cell assembly + block_CA_incoming = np.ones((N_control, N_CA), dtype=bool) # array of ones for the synapses incoming to the cell assembly + block_control = np.ones((N_control, N_control), dtype=bool) # array of ones for the synapses within the control subpopulation + + mask_CA_within = np.append(np.append(block_CA_within, np.logical_not(block_CA_outgoing), axis=1), \ + np.logical_not(np.append(block_CA_incoming, block_control, axis=1)), + axis=0) # binary mask defining the synapses within the cell assembly + mask_CA_outgoing = np.append(np.append(np.logical_not(block_CA_within), block_CA_outgoing, axis=1), \ + np.logical_not(np.append(block_CA_incoming, block_control, axis=1)), + axis=0) # binary mask defining the synapses outgoing from the cell assembly + mask_CA_incoming = np.append(np.logical_not(np.append(block_CA_within, block_CA_outgoing, axis=1)), \ + np.append(block_CA_incoming, np.logical_not(block_control), axis=1), + axis=0) # binary mask defining the synapses incoming to the cell assembly + mask_control = np.append(np.logical_not(np.append(block_CA_within, block_CA_outgoing, axis=1)), \ + np.append(np.logical_not(block_CA_incoming), block_control, axis=1), + axis=0) # binary mask defining the synapses within the control subpopulation + + # early-phase weights + '''h_CA_within = h[mask_CA_within] + h_CA_outgoing = h[mask_CA_outgoing] + h_CA_incoming = h[mask_CA_incoming] + h_control = h[mask_control]''' + h_CA_within = h[np.logical_and(connections, mask_CA_within)] + h_CA_outgoing = h[np.logical_and(connections, mask_CA_outgoing)] + h_CA_incoming = h[np.logical_and(connections, mask_CA_incoming)] + h_control = h[np.logical_and(connections, mask_control)] + + # late-phase weights + '''z_CA_within = z[mask_CA_within] + z_CA_outgoing = z[mask_CA_outgoing] + z_CA_incoming = z[mask_CA_incoming] + z_control = z[mask_control]''' + z_CA_within = 
z[np.logical_and(connections, mask_CA_within)] + z_CA_outgoing = z[np.logical_and(connections, mask_CA_outgoing)] + z_CA_incoming = z[np.logical_and(connections, mask_CA_incoming)] + z_control = z[np.logical_and(connections, mask_control)] + + # total synaptic weights + w_CA_within = h_CA_within + z_CA_within + w_CA_outgoing = h_CA_outgoing + z_CA_outgoing + w_CA_incoming = h_CA_incoming + z_CA_incoming + w_control = h_control + z_control + + # firing rates + v_CA = v.flatten()[np.in1d(all, core)] + v_control = v.flatten()[np.logical_not(np.in1d(all, core))] + + # discretization of the distribution + if bins is None: + binh = np.linspace(np.min(h), np.max(h), 101, endpoint=True) # create range of bins for marginalProbDist(h...) + binz = np.linspace(np.min(z), np.max(z), 101, endpoint=True) # create range of bins for marginalProbDist(z...) + binw = np.linspace(np.min(w), np.max(w), 101, endpoint=True) # create range of bins for marginalProbDist(w...) + binv = np.linspace(np.min(v), np.max(v), 101, endpoint=True) # create range of bins for marginalProbDist(v...) 
+ else: + [binh, binz, binw, binv] = bins # use pre-defined bins + + hstep = binh[1]-binh[0] + zstep = binz[1]-binz[0] + wstep = binw[1]-binw[0] + vstep = binv[1]-binv[0] + valh = np.delete(binh, -1) + hstep/2 # use mean values instead of lower bounds of the bins as values + valz = np.delete(binz, -1) + zstep/2 # use mean values instead of lower bounds of the bins as values + valw = np.delete(binw, -1) + wstep/2 # use mean values instead of lower bounds of the bins as values + valv = np.delete(binv, -1) + vstep/2 # use mean values instead of lower bounds of the bins as values + + # normalization of the distribution + if norm_all: + norm_value_w = np.sum(connections) # normalization factor for weights (number of all connections) + norm_value_v = N_CA + N_control # normalization factor for activities (number of all neurons) + else: + norm_value_w = None # use default (normalization across each subpopulation individually) + norm_value_v = None # use default (normalization across CA and control individually) + + buf, ph_CA_within = marginalProbDist(h_CA_within, binning = True, bin_edges = binh, norm = norm_value_w) + buf, ph_CA_outgoing = marginalProbDist(h_CA_outgoing, binning = True, bin_edges = binh, norm = norm_value_w) + buf, ph_CA_incoming = marginalProbDist(h_CA_incoming, binning = True, bin_edges = binh, norm = norm_value_w) + buf, ph_control = marginalProbDist(h_control, binning = True, bin_edges = binh, norm = norm_value_w) + buf, pz_CA_within = marginalProbDist(z_CA_within, binning = True, bin_edges = binz, norm = norm_value_w) + buf, pz_CA_outgoing = marginalProbDist(z_CA_outgoing, binning = True, bin_edges = binz, norm = norm_value_w) + buf, pz_CA_incoming = marginalProbDist(z_CA_incoming, binning = True, bin_edges = binz, norm = norm_value_w) + buf, pz_control = marginalProbDist(z_control, binning = True, bin_edges = binz, norm = norm_value_w) + buf, pw_CA_within = marginalProbDist(w_CA_within, binning = True, bin_edges = binw, norm = norm_value_w) + buf, 
pw_CA_outgoing = marginalProbDist(w_CA_outgoing, binning = True, bin_edges = binw, norm = norm_value_w) + buf, pw_CA_incoming = marginalProbDist(w_CA_incoming, binning = True, bin_edges = binw, norm = norm_value_w) + buf, pw_control = marginalProbDist(w_control, binning = True, bin_edges = binw, norm = norm_value_w) + buf, pv_CA = marginalProbDist(v_CA, binning = True, bin_edges = binv, norm = norm_value_v) + buf, pv_control = marginalProbDist(v_control, binning = True, bin_edges = binv, norm = norm_value_v) + + # write early-phase weight distribution to file + f = open(timestamp + "_eweight_dist_" + time_for_readout + add + ".txt", "w") + for i in range(len(valh)): + f.write(str(valh[i]) + "\t\t" + str(ph_CA_within[i]) + "\t\t" + str(ph_CA_outgoing[i]) + "\t\t" + \ + str(ph_CA_incoming[i]) + "\t\t" + str(ph_control[i]) + "\n") + f.close() + + # write late-phase weight distribution to file + f = open(timestamp + "_lweight_dist_" + time_for_readout + add + ".txt", "w") + for i in range(len(valz)): + f.write(str(valz[i]) + "\t\t" + str(pz_CA_within[i]) + "\t\t" + str(pz_CA_outgoing[i]) + "\t\t" + \ + str(pz_CA_incoming[i]) + "\t\t" + str(pz_control[i]) + "\n") + f.close() + + # write distribution of total synaptic weights to file + f = open(timestamp + "_totweight_dist_" + time_for_readout + add + ".txt", "w") + for i in range(len(valw)): + f.write(str(valw[i]) + "\t\t" + str(pw_CA_within[i]) + "\t\t" + str(pw_CA_outgoing[i]) + "\t\t" + \ + str(pw_CA_incoming[i]) + "\t\t" + str(pw_control[i]) + "\n") + f.close() + + # write activity distribution to file + f = open(timestamp + "_act_dist_" + time_for_readout + add + ".txt", "w") + for i in range(len(valv)): + f.write(str(valv[i]) + "\t\t" + str(pv_CA[i]) + "\t\t" + str(pv_control[i]) + "\n") + f.close() + + # write gnuplot script + f = open(timestamp + "_plot_dist.gpl", "w") + f.write("### DO NOT EDIT THIS FILE! IT WILL BE OVERWRITTEN. 
###\n\n" + \ + "#set terminal png size 1024,640 enhanced\nset terminal pdf enhanced\n\n" + \ + "#set style fill transparent solid 0.8 noborder # for 'boxes' style\n" + \ + "#set style fill transparent pattern 4 bo # for 'boxes' style\n" + \ + "set log y\nset format y \"%.0e\"\nset yrange [3e-06:1]\nset key outside\n\n" + \ + "h_0 = " + str(h_0) + "\n" +\ + "epsilon = " + str(epsilon) + "\n\n") + + # plotting of early-phase weight distribution + f.write("set output \"" + timestamp + "_eweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binh[0]-10*hstep) + "/h_0:" + str(binh[-1]+10*hstep) + "/h_0]\n") + f.write("set xlabel \"Early-phase weight / h_0\"\nset ylabel \"Relative frequency\"\n") + f.write("plot \"" + timestamp + "_eweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0):($2 > 0 ? $2 : epsilon) t \"CA\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($3 > 0 ? $3 : epsilon) t \"outgoing\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($4 > 0 ? $4 : epsilon) t \"incoming\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($5 > 0 ? $5 : epsilon) t \"control\" with histeps\n") + + # plotting of late-phase weight distribution + f.write("\nset output \"" + timestamp + "_lweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binz[0]-10*zstep) + "/h_0:" + str(binz[-1]+10*zstep) + "/h_0]\n") + f.write("set xlabel \"Late-phase weight / h_0\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") + f.write("plot \"" + timestamp + "_lweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0):($2 > 0 ? $2 : epsilon) t \"CA\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($3 > 0 ? $3 : epsilon) t \"outgoing\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($4 > 0 ? $4 : epsilon) t \"incoming\" with histeps, \\\n" + \ + " \"\" using ($1/h_0):($5 > 0 ? 
$5 : epsilon) t \"control\" with histeps\n") + + # plotting of total weight distribution + f.write("\nset output \"" + timestamp + "_totweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binw[0]-10*wstep) + "/h_0*100:" + str(binw[-1]+10*wstep) + "/h_0*100]\n") + f.write("set xlabel \"Total synaptic weight (%)\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") + f.write("plot \"" + timestamp + "_totweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0*100):($2 > 0 ? $2 : epsilon) t \"CA\" with histeps, \\\n" + \ + " \"\" using ($1/h_0*100):($3 > 0 ? $3 : epsilon) t \"outgoing\" with histeps, \\\n" + \ + " \"\" using ($1/h_0*100):($4 > 0 ? $4 : epsilon) t \"incoming\" with histeps, \\\n" + \ + " \"\" using ($1/h_0*100):($5 > 0 ? $5 : epsilon) t \"control\" with histeps\n") + + # plotting of activity distribution + f.write("\nset output \"" + timestamp + "_act_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binv[0]-10*vstep) + ":" + str(binv[-1]+10*vstep) + "]\n") + f.write("set xlabel \"Neuronal firing rate (Hz)\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") + f.write("plot \"" + timestamp + "_act_dist_" + time_for_readout + add + ".txt\" using 1:($2 > 0 ? $2 : epsilon) t \"CA\" with histeps, \\\n" + \ + " \"\" using 1:($3 > 0 ? 
$3 : epsilon) t \"control\" with histeps\n\n")
+
+	f.close()
+
+	call(["gnuplot", timestamp + "_plot_dist.gpl"])
+	os.chdir(orgdir) # change back to original directory
+
+# plotWeightDistributions3CAs
+# Creates data and plot files of the weight distribution of a network with two or three, possibly overlapping, assemblies at a given time
+# nppath: path to the network_plots directory to read the data from
+# timestamp: a string containing date and time (to access correct paths)
+# add: additional descriptor
+# Nl_exc: the number of excitatory neurons in one line of a quadratic grid
+# time_for_readout: the time at which the weights shall be read out
+# coreA: array of indices of the first cell assembly (core) neurons
+# coreB [optional]: array of indices of the second cell assembly (core) neurons
+# coreC [optional]: array of indices of the third cell assembly (core) neurons
+# h_0 [optional]: the initial weight, and normalization factor for z
+# bins [optional]: list of three arrays, each containing the bins for one of the three quantities
+def plotWeightDistributions3CAs(nppath, timestamp, add, Nl_exc, time_for_readout, coreA, coreB = None, coreC = None, h_0 = 0.420075, bins = None):
+
+	orgdir = os.getcwd() # store the current working directory
+
+	# "any" case: not looking for a specific timestamp, but for any data with a certain time_for_readout in the given directory
+	if timestamp == "any":
+
+		rawpaths = Path(nppath)
+		for x in rawpaths.iterdir():
+			tmppath = os.path.split(str(x))[1] # remove head from path
+			if ("_net_" + time_for_readout + ".txt") in tmppath:
+				timestamp = tmppath.split("_net_")[0]
+				plotWeightDistributions3CAs(nppath, timestamp, add, Nl_exc, time_for_readout, coreA, coreB, coreC, h_0, bins) # call this function again, now with specific timestamp; bins should be provided by calling function
+		return
+
+	# read data from file [timestamp]_net_[time_for_readout].txt
+	os.chdir(nppath) # change to data directory
+	try:
+		connections, h, z, v =
readWeightMatrixData(timestamp + "_net_" + time_for_readout + ".txt", Nl_exc) + z = h_0*z # normalize z + w = h + z # compute total synaptic weight + except ValueError: + raise + except OSError: + raise + + # determine synapses within the cell assemblies + N_tot = Nl_exc**2 # total number of neurons + + mask_coreA = np.zeros((N_tot, N_tot), dtype=bool) + for syn_pre in coreA: + for syn_post in coreA: + if connections[syn_pre,syn_post]: # NEW, TEST + mask_coreA[syn_pre,syn_post] = True + + mask_coreB = np.zeros((N_tot, N_tot), dtype=bool) + if coreB is not None: + for syn_pre in coreB: + for syn_post in coreB: + if connections[syn_pre,syn_post]: # NEW, TEST + mask_coreB[syn_pre,syn_post] = True + + mask_coreC = np.zeros((N_tot, N_tot), dtype=bool) + if coreC is not None: + for syn_pre in coreC: + for syn_post in coreC: + if connections[syn_pre,syn_post]: # NEW, TEST + mask_coreC[syn_pre,syn_post] = True + + # find control synapses (all synapses that are not within a cell assembly) + mask_control = np.logical_and(connections, np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC)))) + + # synapses outgoing from A // TODO + #block_outgoing_A = np.ones((len(coreA), N_tot-len(coreA)), dtype=bool) # array of ones for the synapses outgoing from assembly A + #mask_A_to_B = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_A_to_C = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_A_to_ctrl = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + + # synapses outgoing from B // TODO + #mask_B_to_A = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_B_to_C = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_B_to_ctrl = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + + # synapses outgoing from C // TODO + #mask_C_to_A = 
np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_C_to_B = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + #mask_C_to_ctrl = np.logical_not(np.logical_or(mask_coreA, np.logical_or(mask_coreB, mask_coreC))) + + # synapses incoming... // TODO + + # find exclusive intersections + mask_I_AB = np.logical_and( np.logical_and(mask_coreA, mask_coreB), np.logical_not(mask_coreC) ) + mask_I_AC = np.logical_and( np.logical_and(mask_coreA, mask_coreC), np.logical_not(mask_coreB) ) + mask_I_BC = np.logical_and( np.logical_and(mask_coreB, mask_coreC), np.logical_not(mask_coreA) ) + mask_I_ABC = np.logical_and( mask_coreA, np.logical_and(mask_coreB, mask_coreC) ) + + # remove intersections from exclusive cores + mask_coreA = np.logical_and(mask_coreA, \ + np.logical_and(np.logical_not(mask_I_AB), \ + np.logical_and(np.logical_not(mask_I_AC), np.logical_not(mask_I_ABC)))) + mask_coreB = np.logical_and(mask_coreB, \ + np.logical_and(np.logical_not(mask_I_AB), \ + np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC)))) + mask_coreC = np.logical_and(mask_coreC, \ + np.logical_and(np.logical_not(mask_I_AC), \ + np.logical_and(np.logical_not(mask_I_BC), np.logical_not(mask_I_ABC)))) + + # tests (each should yield true) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_coreB))) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_coreC))) + #print("Test:", not np.any(np.logical_and(mask_coreB, mask_coreC))) + #print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_AC))) + #print("Test:", not np.any(np.logical_and(mask_I_AB, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_I_AC, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_I_AC, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_I_BC, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_control, 
mask_coreA))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_coreB))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_coreC))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_I_AB))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_I_AC))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_control, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_AB))) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_AC))) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_coreA, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_AB))) + #print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_AC))) + #print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_coreB, mask_I_ABC))) + #print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_AB))) + #print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_AC))) + #print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_BC))) + #print("Test:", not np.any(np.logical_and(mask_coreC, mask_I_ABC))) + + # early-phase weights + h_coreA = h[mask_coreA] + h_coreB = h[mask_coreB] + h_coreC = h[mask_coreC] + h_I_AB = h[mask_I_AB] + h_I_AC = h[mask_I_AC] + h_I_BC = h[mask_I_BC] + h_I_ABC = h[mask_I_ABC] + h_control = h[mask_control] + + # late-phase weights + z_coreA = z[mask_coreA] + z_coreB = z[mask_coreB] + z_coreC = z[mask_coreC] + z_I_AB = z[mask_I_AB] + z_I_AC = z[mask_I_AC] + z_I_BC = z[mask_I_BC] + z_I_ABC = z[mask_I_ABC] + z_control = z[mask_control] + + # total synaptic weights + w_coreA = h_coreA + z_coreA + w_coreB = h_coreB + z_coreB + w_coreC = h_coreC + z_coreC + w_I_AB = h_I_AB + z_I_AB + w_I_AC = h_I_AC + z_I_AC + w_I_BC = h_I_BC + z_I_BC + w_I_ABC = h_I_ABC + z_I_ABC + w_control = h_control + 
z_control + + # mean and standard deviation of the subpopulations (to compare to values from adjacencyFunctionsAttractors.py) + #mean_z_coreA = np.mean(z_coreA) + #mean_z_coreB = np.mean(z_coreB) + #mean_z_coreC = np.mean(z_coreC) + #mean_z_I_AB = np.mean(z_I_AB) + #mean_z_I_AC = np.mean(z_I_AC) + #mean_z_I_BC = np.mean(z_I_BC) + #mean_z_I_ABC = np.mean(z_I_ABC) + #mean_z_control = np.mean(z_control) + #sd_z_coreA = np.std(z_coreA) + #sd_z_coreB = np.std(z_coreB) + #sd_z_coreC = np.std(z_coreC) + #sd_z_I_AB = np.std(z_I_AB) + #sd_z_I_AC = np.std(z_I_AC) + #sd_z_I_BC = np.std(z_I_BC) + #sd_z_I_ABC = np.std(z_I_ABC) + #sd_z_control = np.std(z_control) + + # discretization of the distribution + if bins is None: + binh = np.linspace(np.min(h[connections]), np.max(h), 101, endpoint=True) # create range of bins for marginalProbDist(h...) + binz = np.linspace(np.min(z[connections]), np.max(z), 101, endpoint=True) # create range of bins for marginalProbDist(z...) + binw = np.linspace(np.min(w[connections]), np.max(w), 101, endpoint=True) # create range of bins for marginalProbDist(w...) 
+ else: + [binh, binz, binw] = bins # use pre-defined bins + + hstep = binh[1]-binh[0] + zstep = binz[1]-binz[0] + wstep = binw[1]-binw[0] + valh = np.delete(binh, -1) + hstep/2 # use mean values instead of lower bounds of the bins as values + valz = np.delete(binz, -1) + zstep/2 # use mean values instead of lower bounds of the bins as values + valw = np.delete(binw, -1) + wstep/2 # use mean values instead of lower bounds of the bins as values + + numconn = len(h[connections]) # normalization constant + + buf, ph_coreA = marginalProbDist(h_coreA, binning = True, bin_edges = binh, norm = numconn) + buf, pz_coreA = marginalProbDist(z_coreA, binning = True, bin_edges = binz, norm = numconn) + buf, pw_coreA = marginalProbDist(w_coreA, binning = True, bin_edges = binw, norm = numconn) + if coreB is not None: + buf, ph_coreB = marginalProbDist(h_coreB, binning = True, bin_edges = binh, norm = numconn) + buf, pz_coreB = marginalProbDist(z_coreB, binning = True, bin_edges = binz, norm = numconn) + buf, pw_coreB = marginalProbDist(w_coreB, binning = True, bin_edges = binw, norm = numconn) + if h_I_AB.size > 0: + buf, ph_I_AB = marginalProbDist(h_I_AB, binning = True, bin_edges = binh, norm = numconn) + buf, pz_I_AB = marginalProbDist(z_I_AB, binning = True, bin_edges = binz, norm = numconn) + buf, pw_I_AB = marginalProbDist(w_I_AB, binning = True, bin_edges = binw, norm = numconn) + if coreC is not None: + buf, ph_coreC = marginalProbDist(h_coreC, binning = True, bin_edges = binh, norm = numconn) + buf, pz_coreC = marginalProbDist(z_coreC, binning = True, bin_edges = binz, norm = numconn) + buf, pw_coreC = marginalProbDist(w_coreC, binning = True, bin_edges = binw, norm = numconn) + if h_I_AC.size > 0: + buf, ph_I_AC = marginalProbDist(h_I_AC, binning = True, bin_edges = binh, norm = numconn) + buf, pz_I_AC = marginalProbDist(z_I_AC, binning = True, bin_edges = binz, norm = numconn) + buf, pw_I_AC = marginalProbDist(w_I_AC, binning = True, bin_edges = binw, norm = numconn) 
+ if h_I_BC.size > 0: + buf, ph_I_BC = marginalProbDist(h_I_BC, binning = True, bin_edges = binh, norm = numconn) + buf, pz_I_BC = marginalProbDist(z_I_BC, binning = True, bin_edges = binz, norm = numconn) + buf, pw_I_BC = marginalProbDist(w_I_BC, binning = True, bin_edges = binw, norm = numconn) + if h_I_ABC.size > 0: + buf, ph_I_ABC = marginalProbDist(h_I_ABC, binning = True, bin_edges = binh, norm = numconn) + buf, pz_I_ABC = marginalProbDist(z_I_ABC, binning = True, bin_edges = binz, norm = numconn) + buf, pw_I_ABC = marginalProbDist(w_I_ABC, binning = True, bin_edges = binw, norm = numconn) + buf, ph_control = marginalProbDist(h_control, binning = True, bin_edges = binh, norm = numconn) + buf, pz_control = marginalProbDist(z_control, binning = True, bin_edges = binz, norm = numconn) + buf, pw_control = marginalProbDist(w_control, binning = True, bin_edges = binw, norm = numconn) + + # Write weight distributions to files + fh = open(timestamp + "_eweight_dist_" + time_for_readout + add + ".txt", "w") + fz = open(timestamp + "_lweight_dist_" + time_for_readout + add + ".txt", "w") + fw = open(timestamp + "_totweight_dist_" + time_for_readout + add + ".txt", "w") + + for i in range(len(valh)): + fh.write(str(valh[i]) + "\t\t" + str(ph_coreA[i]) + "\t\t") + fz.write(str(valz[i]) + "\t\t" + str(pz_coreA[i]) + "\t\t") + fw.write(str(valw[i]) + "\t\t" + str(pw_coreA[i]) + "\t\t") + + if coreB is not None: + fh.write(str(ph_coreB[i]) + "\t\t") + fz.write(str(pz_coreB[i]) + "\t\t") + fw.write(str(pw_coreB[i]) + "\t\t") + + if h_I_AB.size > 0: + fh.write(str(ph_I_AB[i]) + "\t\t") + fz.write(str(pz_I_AB[i]) + "\t\t") + fw.write(str(pw_I_AB[i]) + "\t\t") + else: + fh.write("nan\t\t") + fz.write("nan\t\t") + fw.write("nan\t\t") + + if coreC is not None: + fh.write(str(ph_coreC[i]) + "\t\t") + fz.write(str(pz_coreC[i]) + "\t\t") + fw.write(str(pw_coreC[i]) + "\t\t") + + if h_I_AC.size > 0: + fh.write(str(ph_I_AC[i]) + "\t\t") + fz.write(str(pz_I_AC[i]) + "\t\t") + 
fw.write(str(pw_I_AC[i]) + "\t\t") + else: + fh.write("nan\t\t") + fz.write("nan\t\t") + fw.write("nan\t\t") + + if h_I_BC.size > 0: + fh.write(str(ph_I_BC[i]) + "\t\t") + fz.write(str(pz_I_BC[i]) + "\t\t") + fw.write(str(pw_I_BC[i]) + "\t\t") + else: + fh.write("nan\t\t") + fz.write("nan\t\t") + fw.write("nan\t\t") + + if h_I_ABC.size > 0: + fh.write(str(ph_I_ABC[i]) + "\t\t") + fz.write(str(pz_I_ABC[i]) + "\t\t") + fw.write(str(pw_I_ABC[i]) + "\t\t") + else: + fh.write("nan\t\t") + fz.write("nan\t\t") + fw.write("nan\t\t") + else: + fh.write("nan\t\tnan\t\tnan\t\tnan\t\t") + fz.write("nan\t\tnan\t\tnan\t\tnan\t\t") + fw.write("nan\t\tnan\t\tnan\t\tnan\t\t") + else: + fh.write("nan\t\tnan\t\tnan\t\tnan\t\tnan\t\tnan\t\t") + fz.write("nan\t\tnan\t\tnan\t\tnan\t\tnan\t\tnan\t\t") + fw.write("nan\t\tnan\t\tnan\t\tnan\t\tnan\t\tnan\t\t") + + fh.write(str(ph_control[i]) + "\n") + fz.write(str(pz_control[i]) + "\n") + fw.write(str(pw_control[i]) + "\n") + + fh.close() + fz.close() + fw.close() + + # write gnuplot script + f = open(timestamp + "_plot_dist.gpl", "w") + f.write("### DO NOT EDIT THIS FILE! IT WILL BE OVERWRITTEN. ###\n\n" + \ + "#set terminal png size 1024,640 enhanced\nset terminal pdf enhanced\n\n" + \ + "#set style fill transparent solid 0.8 noborder # for 'boxes' style\n" + \ + "#set style fill transparent pattern 4 bo # for 'boxes' style\n" + \ + "set log y\nset format y \"%.0e\"\nset yrange [3e-06:1]\nset key outside\n\n" + \ + "h_0 = " + str(h_0) + "\n" +\ + "epsilon = " + str(epsilon) + "\n\n") + + # plotting of early-phase weight distribution + f.write("set output \"" + timestamp + "_eweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binh[0]-10*hstep) + "/h_0:" + str(binh[-1]+10*hstep) + "/h_0]\n") + f.write("set xlabel \"Early-phase weight / h_0\"\nset ylabel \"Relative frequency\"\n") + f.write("plot \"" + timestamp + "_eweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0):($2 > 0 ? 
$2 : epsilon) t \"Within ~A{.8\\\\~}\" lc 6 with histeps, \\\n") + if coreB is not None: + f.write("\"\" using ($1/h_0):($3 > 0 ? $3 : epsilon) t \"Within ~B{.8\\\\~}\" lc 7 with histeps, \\\n") + if coreC is not None: + f.write("\"\" using ($1/h_0):($5 > 0 ? $5 : epsilon) t \"Within ~C{.8\\\\~}\" lc 5 with histeps, \\\n") + if h_I_AB.size > 0: + f.write("\"\" using ($1/h_0):($4 > 0 ? $4 : epsilon) t \"Within I_{AB}\" lc 1 with histeps, \\\n") + if coreC is not None: + if h_I_AC.size > 0: + f.write("\"\" using ($1/h_0):($6 > 0 ? $6 : epsilon) t \"Within I_{AC}\" lc 2 with histeps, \\\n") + if h_I_BC.size > 0: + f.write("\"\" using ($1/h_0):($7 > 0 ? $7 : epsilon) t \"Within I_{BC}\" lc 4 with histeps, \\\n") + if h_I_ABC.size > 0: + f.write("\"\" using ($1/h_0):($8 > 0 ? $8 : epsilon) t \"Within I_{ABC}\" lc 3 with histeps, \\\n") + f.write("\"\" using ($1/h_0):($9 > 0 ? $9 : epsilon) t \"Others\" lc rgb \"#eeeeee\" with histeps\n") + + # plotting of late-phase weight distribution + f.write("\nset output \"" + timestamp + "_lweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binz[0]-10*zstep) + "/h_0:" + str(binz[-1]+10*zstep) + "/h_0]\n") + f.write("set xlabel \"Late-phase weight / h_0\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") + f.write("plot \"" + timestamp + "_lweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0):($2 > 0 ? $2 : epsilon) t \"Within ~A{.8\\\\~}\" lc 6 with histeps, \\\n") + if coreB is not None: + f.write("\"\" using ($1/h_0):($3 > 0 ? $3 : epsilon) t \"Within ~B{.8\\\\~}\" lc 7 with histeps, \\\n") + if coreC is not None: + f.write("\"\" using ($1/h_0):($5 > 0 ? $5 : epsilon) t \"Within ~C{.8\\\\~}\" lc 5 with histeps, \\\n") + if h_I_AB.size > 0: + f.write("\"\" using ($1/h_0):($4 > 0 ? $4 : epsilon) t \"Within I_{AB}\" lc 1 with histeps, \\\n") + if coreC is not None: + if h_I_AC.size > 0: + f.write("\"\" using ($1/h_0):($6 > 0 ? 
$6 : epsilon) t \"Within I_{AC}\" lc 2 with histeps, \\\n") + if h_I_BC.size > 0: + f.write("\"\" using ($1/h_0):($7 > 0 ? $7 : epsilon) t \"Within I_{BC}\" lc 4 with histeps, \\\n") + if h_I_ABC.size > 0: + f.write("\"\" using ($1/h_0):($8 > 0 ? $8 : epsilon) t \"Within I_{ABC}\" lc 3 with histeps, \\\n") + f.write("\"\" using ($1/h_0):($9 > 0 ? $9 : epsilon) t \"Others\" lc rgb \"#eeeeee\" with histeps\n") + + # plotting of total weight distribution + f.write("\nset output \"" + timestamp + "_totweight_dist_" + time_for_readout + add + ".pdf\"\n") + f.write("set xrange [" + str(binw[0]-10*wstep) + "/h_0*100:" + str(binw[-1]+10*wstep) + "/h_0*100]\n") + f.write("set xlabel \"Total synaptic weight (%)\"\nset ylabel \"Relative frequency\"\nset format y \"%.0e\"\n") + f.write("plot \"" + timestamp + "_totweight_dist_" + time_for_readout + add + ".txt\" using ($1/h_0*100):($2 > 0 ? $2 : epsilon) t \"Within ~A{.8\\\\~}\" lc 6 with histeps, \\\n") + if coreB is not None: + f.write("\"\" using ($1/h_0*100):($3 > 0 ? $3 : epsilon) t \"Within ~B{.8\\\\~}\" lc 7 with histeps, \\\n") + if coreC is not None: + f.write("\"\" using ($1/h_0*100):($5 > 0 ? $5 : epsilon) t \"Within ~C{.8\\\\~}\" lc 5 with histeps, \\\n") + if h_I_AB.size > 0: + f.write("\"\" using ($1/h_0*100):($4 > 0 ? $4 : epsilon) t \"Within I_{AB}\" lc 1 with histeps, \\\n") + if coreC is not None: + if h_I_AC.size > 0: + f.write("\"\" using ($1/h_0*100):($6 > 0 ? $6 : epsilon) t \"Within I_{AC}\" lc 2 with histeps, \\\n") + if h_I_BC.size > 0: + f.write("\"\" using ($1/h_0*100):($7 > 0 ? $7 : epsilon) t \"Within I_{BC}\" lc 4 with histeps, \\\n") + if h_I_ABC.size > 0: + f.write("\"\" using ($1/h_0*100):($8 > 0 ? $8 : epsilon) t \"Within I_{ABC}\" lc 3 with histeps, \\\n") + f.write("\"\" using ($1/h_0*100):($9 > 0 ? 
$9 : epsilon) t \"Others\" lc rgb \"#eeeeee\" with histeps\n") + + f.close() + + call(["gnuplot", timestamp + "_plot_dist.gpl"]) + + + os.chdir(orgdir) # change back to original directory + +# mapToBins +# Adjusts the values of an array such that they match bins defined by another array +# a: array of values +# b: array of bin edges (including the terminal one) +# return: array of the shape of a with values discretized according to b +def mapToBins(a, b): + a = a.flatten() + + for i in range(len(a)): + + if a[i] < b[0] or a[i] > b[len(b)-1]: + raise ValueError("Value " + str(a[i]) + " at index " + str(i) + " is out of range.") + + for j in reversed(range(len(b)-1)): + if a[i] > b[j]: + a[i] = (b[j+1] + b[j]) / 2 + break + return a + +# marginalProbDist +# Computes a marginal probability distribution +# a: array of outcomes (e.g., array of activities of all neurons in a network or array of weights of all synapses in a network) +# binning [optional]: specifies if bins are used to discretize the values of the distribution +# bin_edges [optional]: pre-specified range of binning edges; only applicable if binning is used +# norm [optional]: value by which to normalize +# return: array of values (unless bin_edges were pre-defined) and corresponding array of their probabilities +def marginalProbDist(a, binning = False, bin_edges = None, norm = None): + + if binning == True: + if bin_edges is None: # generate bin_edges for 100 bins + bin_edges = np.linspace(np.min(a), np.max(a), 101, endpoint=True) # create array of bins (defined by their edges) + values_mean = np.delete(bin_edges, -1) + (np.max(a)-np.min(a)) / 100 / 2 # use mean values instead of lower bounds as values of the bins + else: + values_mean = None + + freq_of_values = np.histogram(a, bin_edges)[0] # create histogram of activity value occurrences + #values_mean, freq_of_values = np.unique(mapToBins(a, bin_edges), return_counts=True, axis=0) # yields the same result but without empty bins + else: + values_mean, freq_of_values = 
np.unique(a, return_counts=True) # determine and count occurring activities + + if norm is None: + freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize only over this quantity + else: + freq_of_values = freq_of_values / norm # normalize over given number (i.e., typically, over all values) + + return values_mean, freq_of_values + +# jointProbDist +# Computes a joint probability distribution +# a1: first array of outcomes (e.g., array of activities of all neurons in a network or array of weights of all synapses in a network) +# a2: second array of outcomes +# binning [optional]: specifies if bins are used to discretize the values of the distribution +# return: array of value pairs and corresponding array of their probabilities +def jointProbDist(a1, a2, binning = False): + + if binning == True: + ab = np.concatenate((a1,a2)) + + try: + values = np.arange(np.min(ab), np.max(ab), (np.max(ab)-np.min(ab)) / 100) # create array of activity value bins + except ValueError: + values = np.array([np.min(ab)]) + bin_edges = np.concatenate((values, np.max(ab)), axis=None) # add last edge + a1 = mapToBins(a1, bin_edges) + a2 = mapToBins(a2, bin_edges) + + ja = np.array([a1.flatten(), a2.flatten()]).T # array of pairs of the two outcomes (for one neuron or synapse or ... at two times) + + value_pairs, freq_of_values = np.unique(ja, return_counts=True, axis=0) # determine and count occurring activities + freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize + else: + ja = np.array([a1.flatten(),a2.flatten()]).T # array of pairs of the two outcomes (for one neuron or synapse or ... 
at two times) + + value_pairs, freq_of_values = np.unique(ja, return_counts=True, axis=0) # determine and count occurring activities + freq_of_values = freq_of_values / np.sum(freq_of_values) # normalize + #dist = np.asarray(value_pairs, freq_of_values).T # create distribution + + return value_pairs, freq_of_values + +# main +# argv[]: command-line arguments +'''### examples: +if __name__ == "__main__": + + ###################################################################################################### + ## early- and late-phase weight distributions, four each, with overall normalization, + ## as used in Luboeinski and Tetzlaff, Commun. Biol., 2021 + + Nl_exc = 40 # number of excitatory neurons in one line of a square + h_0 = 0.420075 + core = np.arange(150) # indices of the assembly neurons (assembly size: 150) + + if len(sys.argv) == 3: + + ts1 = str(sys.argv[1]) # timestamp for simulation data before consolidation + ts2 = str(sys.argv[2]) # timestamp for simulation data after consolidation + + # define bins (can also be done automatically, but then bins are different for each sample) + binh = np.linspace(0.00212467, 0.74913949, 101, endpoint=True) # bins for early-phase weights + binz = np.linspace(0.00000000, 0.33602527, 101, endpoint=True) # bins for late-phase weights + binw = np.linspace(0.00212467, 0.76000000, 101, endpoint=True) # bins for total synaptic weights + binv = np.linspace(0.0, 102.0, 51, endpoint=True) # bins for activities + bins = [binh, binz, binw, binv] + + # compute and plot distributions + plotDistributions("10s", ts1, "_150default", Nl_exc, "20.0", core, bins=bins, norm_all=True) # before 10s-recall + plotDistributions("10s", ts1, "_150default", Nl_exc, "20.1", core, bins=bins, norm_all=True) # after 10s-recall + plotDistributions("8h", ts2, "_150default", Nl_exc, "28810.0", core, bins=bins, norm_all=True) # before 8h-recall + plotDistributions("8h", ts2, "_150default", Nl_exc, "28810.1", core, bins=bins, norm_all=True) # after 8h-recall + + 
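The binning logic used by marginalProbDist and the unique-pair counting used by jointProbDist can be exercised in isolation. Below is a minimal, self-contained sketch with illustrative names (`marginal_prob_dist` is a simplified stand-in, not the function from this script; the real functions above additionally handle the unbinned case and an external normalization constant):

```python
import numpy as np

def marginal_prob_dist(a, bin_edges, norm=None):
    # histogram of occurrences per bin, as in the binning branch of marginalProbDist
    freq = np.histogram(a, bin_edges)[0]
    # bin centers: lower edges shifted by half a bin width
    values_mean = bin_edges[:-1] + (bin_edges[1] - bin_edges[0]) / 2
    # normalize either over this array or over a given total count
    freq = freq / (np.sum(freq) if norm is None else norm)
    return values_mean, freq

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=1000)       # synthetic "weights"
edges = np.linspace(0.0, 1.0, 11, endpoint=True)  # 10 bins
vals, probs = marginal_prob_dist(weights, edges)  # probs sums to 1

# joint distribution via unique row pairs, as in jointProbDist
a1 = np.array([0, 1, 0, 1])
a2 = np.array([1, 1, 0, 1])
pairs, pj = np.unique(np.array([a1, a2]).T, return_counts=True, axis=0)
pj = pj / np.sum(pj)  # pairs (0,0), (0,1), (1,1) with probabilities 0.25, 0.25, 0.5
```

Passing `norm=numconn` (the total number of connections), as done in the plotting code above, makes the subpopulation distributions sum to the fraction of all synapses they contain rather than to 1.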
###################################################################################################### + ## Network with 3 cell assemblies - weight distributions with automatic determination of bins + ## and without overall normalization + + Nl_exc = 50 # number of excitatory neurons in one line of a square + h_0 = 0.420075 + core_size = 600 + core1 = np.arange(core_size) + core2 = np.arange(core_size, 2*core_size) + core3 = np.arange(2*core_size, 3*core_size) + + minmax = findOverallMinMax(".", Nl_exc, "28810.0", h_0) + binh = np.linspace(minmax[0][0], minmax[0][1], 101, endpoint=True) # bins for early-phase weights + binz = np.linspace(minmax[1][0], minmax[1][1], 101, endpoint=True) # bins for late-phase weights + binw = np.linspace(minmax[2][0], minmax[2][1], 101, endpoint=True) # bins for total synaptic weights + bins = [binh, binz, binw] + np.save("bins_28810.0.npy", bins) + np.savetxt("minmax_28810.0.txt", minmax) + #bins = np.load("bins_28810.0.npy", allow_pickle=True) + + plotWeightDistributions3CAs("stdconn", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn1", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn2", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn3", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn4", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn5", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn6", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn7", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn8", "any", "", Nl_exc, "28810.0", core1, core2, core3, bins=bins) + plotWeightDistributions3CAs("altconn9", "any", "", Nl_exc, "28810.0", 
core1, core2, core3, bins=bins) +''' diff --git a/simulation-bin/run b/simulation-bin/run deleted file mode 100644 index 7de6ae4..0000000 --- a/simulation-bin/run +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/sh -rm -f saved_state.txt -./net.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Recall after 8h" - diff --git a/simulation-bin/run2 b/simulation-bin/run2 deleted file mode 100644 index 1ab8102..0000000 --- a/simulation-bin/run2 +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/sh -rm -f saved_state.txt -./net.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" -mv -f saved_state0.txt saved_state.txt -./net.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" diff --git a/simulation-bin/run3 b/simulation-bin/run3 deleted file mode 100644 index 7a40c46..0000000 --- a/simulation-bin/run3 +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/sh -rm -f saved_state.txt -./net_irs.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" -mv -f saved_state0.txt saved_state.txt -./net_irs.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn=F100D1at3610.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, interm. 
recall, 8h-recall" diff --git a/simulation-bin/2N1S.out b/simulation-bin/run_binary_paper1/2N1S.out similarity index 56% rename from simulation-bin/2N1S.out rename to simulation-bin/run_binary_paper1/2N1S.out index f40ae8a..ab562b5 100755 Binary files a/simulation-bin/2N1S.out and b/simulation-bin/run_binary_paper1/2N1S.out differ diff --git a/simulation-bin/connections.txt b/simulation-bin/run_binary_paper1/connections.txt similarity index 100% rename from simulation-bin/connections.txt rename to simulation-bin/run_binary_paper1/connections.txt diff --git a/simulation-bin/run_binary_paper1/net100.out b/simulation-bin/run_binary_paper1/net100.out new file mode 100755 index 0000000..d47297c Binary files /dev/null and b/simulation-bin/run_binary_paper1/net100.out differ diff --git a/simulation-bin/run_binary_paper1/net150.out b/simulation-bin/run_binary_paper1/net150.out new file mode 100755 index 0000000..5873426 Binary files /dev/null and b/simulation-bin/run_binary_paper1/net150.out differ diff --git a/simulation-bin/run_binary_paper1/net150_IRS.out b/simulation-bin/run_binary_paper1/net150_IRS.out new file mode 100755 index 0000000..2658448 Binary files /dev/null and b/simulation-bin/run_binary_paper1/net150_IRS.out differ diff --git a/simulation-bin/run_binary_paper1/net200.out b/simulation-bin/run_binary_paper1/net200.out new file mode 100755 index 0000000..918e099 Binary files /dev/null and b/simulation-bin/run_binary_paper1/net200.out differ diff --git a/simulation-bin/run_binary_paper1/net250.out b/simulation-bin/run_binary_paper1/net250.out new file mode 100755 index 0000000..327f90b Binary files /dev/null and b/simulation-bin/run_binary_paper1/net250.out differ diff --git a/simulation-bin/run_binary_paper1/net300.out b/simulation-bin/run_binary_paper1/net300.out new file mode 100755 index 0000000..1b56a33 Binary files /dev/null and b/simulation-bin/run_binary_paper1/net300.out differ diff --git a/simulation-bin/run_binary_paper1/net350.out 
b/simulation-bin/run_binary_paper1/net350.out new file mode 100755 index 0000000..31697db Binary files /dev/null and b/simulation-bin/run_binary_paper1/net350.out differ diff --git a/simulation-bin/run_binary_paper1/net400.out b/simulation-bin/run_binary_paper1/net400.out new file mode 100755 index 0000000..c71bcb2 Binary files /dev/null and b/simulation-bin/run_binary_paper1/net400.out differ diff --git a/simulation-bin/run_binary_paper1/net450.out b/simulation-bin/run_binary_paper1/net450.out new file mode 100755 index 0000000..502d9fa Binary files /dev/null and b/simulation-bin/run_binary_paper1/net450.out differ diff --git a/simulation-bin/run_binary_paper1/net50.out b/simulation-bin/run_binary_paper1/net50.out new file mode 100755 index 0000000..e714bab Binary files /dev/null and b/simulation-bin/run_binary_paper1/net50.out differ diff --git a/simulation-bin/net.out b/simulation-bin/run_binary_paper1/net500.out similarity index 58% rename from simulation-bin/net.out rename to simulation-bin/run_binary_paper1/net500.out index f561679..f5a37fe 100755 Binary files a/simulation-bin/net.out and b/simulation-bin/run_binary_paper1/net500.out differ diff --git a/simulation-bin/run_2N1S b/simulation-bin/run_binary_paper1/run_2N1S similarity index 100% rename from simulation-bin/run_2N1S rename to simulation-bin/run_binary_paper1/run_2N1S diff --git a/simulation-bin/run_binary_paper1/run_IRS b/simulation-bin/run_binary_paper1/run_IRS new file mode 100644 index 0000000..d93fd6e --- /dev/null +++ b/simulation-bin/run_binary_paper1/run_IRS @@ -0,0 +1,5 @@ +#!/bin/sh +rm -f saved_state.txt +./net150_IRS.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net150_IRS.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn=F100D1at3610.0 -recall=F100D1at28810.0 -w_ei=2 
-w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, interm. recall, 8h-recall" diff --git a/simulation-bin/run_binary_paper1/run_full b/simulation-bin/run_binary_paper1/run_full new file mode 100644 index 0000000..493094b --- /dev/null +++ b/simulation-bin/run_binary_paper1/run_full @@ -0,0 +1,4 @@ +#!/bin/sh +rm -f saved_state.txt +./net150.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -noff -purpose="Recall after 8h" + diff --git a/simulation-bin/run_binary_paper1/run_sizes b/simulation-bin/run_binary_paper1/run_sizes new file mode 100644 index 0000000..bcb79a0 --- /dev/null +++ b/simulation-bin/run_binary_paper1/run_sizes @@ -0,0 +1,53 @@ +#!/bin/sh +rm -f saved_state.txt +./net50.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net50.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net100.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net100.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net150.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 
-sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net150.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net200.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net200.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net250.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net250.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net300.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net300.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net350.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 
-theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net350.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net400.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net400.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net450.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net450.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt +./net500.out -Nl_exc=40 -Nl_inh=20 -t_max=30 -N_stim=25 -pc=0.1 -learn=TRIPLETf100 -recall=F100D1at20.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Learning, 10s-recall" +mv -f saved_state0.txt saved_state.txt +./net500.out -Nl_exc=40 -Nl_inh=20 -t_max=28820 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at28810.0 -w_ei=2 -w_ie=4.0 -w_ii=4.0 -I_0=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Consolidation, 8h-recall" + +rm -f saved_state.txt + diff --git a/simulation-bin/run_binary_paper2/activation/NOOVERLAP/run b/simulation-bin/run_binary_paper2/activation/NOOVERLAP/run new file mode 100644 index 
0000000..3c63a5b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/NOOVERLAP/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP10 no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no ABC/run
new file mode 100644
index 0000000..f141ac4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no AC, no ABC/run
new file mode 100644
index 0000000..406faf4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no AC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no BC, no ABC/run
new file mode 100644
index 0000000..0f2553e
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP10 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP10/run b/simulation-bin/run_binary_paper2/activation/OVERLAP10/run
new file mode 100644
index 0000000..d0ad253
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP10/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP15 no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no ABC/run
new file mode 100644
index 0000000..842db4c
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP15 no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP15 no AC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no AC, no ABC/run
new file mode 100644
index 0000000..e585444
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no AC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP15 no AC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP15 no BC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no BC, no ABC/run
new file mode 100644
index 0000000..b352d53
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP15 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP15 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP15/run b/simulation-bin/run_binary_paper2/activation/OVERLAP15/run
new file mode 100644
index 0000000..1354168
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP15/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP15" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP20 no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no ABC/run
new file mode 100644
index 0000000..bc21534
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP20 no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP20 no AC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no AC, no ABC/run
new file mode 100644
index 0000000..a5e4975
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no AC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP20 no AC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP20 no BC, no ABC/run b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no BC, no ABC/run
new file mode 100644
index 0000000..3f82f08
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP20 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP20 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/OVERLAP20/run b/simulation-bin/run_binary_paper2/activation/OVERLAP20/run
new file mode 100644
index 0000000..e51a535
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/OVERLAP20/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP20" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/activation/net_activation.out b/simulation-bin/run_binary_paper2/activation/net_activation.out
new file mode 100755
index 0000000..1214ce3
Binary files /dev/null and b/simulation-bin/run_binary_paper2/activation/net_activation.out differ
diff --git a/simulation-bin/run_binary_paper2/activation/runner b/simulation-bin/run_binary_paper2/activation/runner
new file mode 100644
index 0000000..0336d90
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/activation/runner
@@ -0,0 +1,121 @@
+#!/bin/sh
+
+# first copies data from learning/consolidation simulation and then runs simulation to investigate spontaneous activation
+
+# uses 'screen' to run process(es) in the background
+
+learned_dir="../organization/3rd"
+
+### 10 ###
+cd OVERLAP10
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP10/connections.txt" .
+cp "../$learned_dir/OVERLAP10/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP10 no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP10 no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no AC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP10 no AC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP10 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no BC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP10 no BC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP10 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+### 15 ###
+cd ../OVERLAP15
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP15/connections.txt" .
+cp "../$learned_dir/OVERLAP15/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP15 no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP15 no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no AC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP15 no AC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP15 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no BC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP15 no BC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP15 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+### 20 ###
+cd "../OVERLAP20"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP20/connections.txt" .
+cp "../$learned_dir/OVERLAP20/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP20 no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP20 no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no AC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP20 no AC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP20 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no BC, no ABC"
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/OVERLAP20 no BC, no ABC/connections.txt" .
+cp "../$learned_dir/OVERLAP20 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+### NO ###
+cd ../NOOVERLAP
+cp "../net_activation.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$learned_dir/NOOVERLAP/connections.txt" .
+cp "../$learned_dir/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/organization/1st/FIRST/net.out b/simulation-bin/run_binary_paper2/organization/1st/FIRST/net.out
new file mode 100755
index 0000000..2f91f2c
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/1st/FIRST/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization/1st/FIRST/run b/simulation-bin/run_binary_paper2/organization/1st/FIRST/run
new file mode 100755
index 0000000..397a3d8
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization/1st/FIRST/run
@@ -0,0 +1,8 @@
+#!/bin/sh
+
+# t = 0 ... 12, learning A
+./net.out -Nl=50 -Nl_inh=25 -t_max=12 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat10.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="1st CA"
+
+cd ../../2nd
+/bin/sh runner
+
diff --git a/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/net.out b/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/net.out
new file mode 100755
index 0000000..04f5825
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/run b/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/run
new file mode 100755
index 0000000..2eaceef
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization/2nd/NOOVERLAP/run
@@ -0,0 +1,8 @@
+#!/bin/sh
+
+# t = 12 ... 15, learning B
+./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA"
+
+cd ../../3rd
+/bin/sh runner_NOOVERLAP
+
diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/net.out b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/net.out
new file mode 100755
index 0000000..a1fc274
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/run b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/run
new file mode 100755
index 0000000..5e493f4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP10/run
@@ -0,0 +1,8 @@
+#!/bin/sh
+
+# t = 12 ...
15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP10 + diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/net.out b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/net.out new file mode 100755 index 0000000..1d55b88 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/run b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/run new file mode 100755 index 0000000..206b2e9 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP15/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP15 + diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/net.out b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/net.out new file mode 100755 index 0000000..d86b7d1 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/run b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/run new file mode 100755 index 0000000..baf0816 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/2nd/OVERLAP20/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 
15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP20 + diff --git a/simulation-bin/run_binary_paper2/organization/2nd/runner b/simulation-bin/run_binary_paper2/organization/2nd/runner new file mode 100644 index 0000000..b8748ee --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/2nd/runner @@ -0,0 +1,28 @@ +#!/bin/sh +learned_dir="../1st/FIRST" + +### 10 ### +cd OVERLAP10 +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### 15 ### +cd ../OVERLAP15 +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### 20 ### +cd "../OVERLAP20" +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### NO ### +cd ../NOOVERLAP +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd .. diff --git a/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/net.out b/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/net.out new file mode 100755 index 0000000..0d76dca Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/run b/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/NOOVERLAP/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/net.out new file mode 100755 index 0000000..6d7c785 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/net.out new file mode 100755 index 0000000..ca00e40 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no AC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/net.out new file mode 100755 index 0000000..03ab8c1 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10 no BC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/net.out new file mode 100755 index 0000000..8e965b0 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP10/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/net.out new file mode 100755 index 0000000..5c6b537 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/net.out new file mode 100755 index 0000000..7ac2539 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no AC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/net.out new file mode 100755 index 0000000..0b4f8ac Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15 no BC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/net.out new file mode 100755 index 0000000..77446d5 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP15/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/net.out new file mode 100755 index 0000000..0dce3ff Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/net.out new file mode 100755 index 0000000..c8ef658 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no AC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/net.out new file mode 100755 index 0000000..f1218b0 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20 no BC, no ABC/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/net.out b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/net.out new file mode 100755 index 0000000..b42eea3 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/run b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/OVERLAP20/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating +./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA" + diff --git a/simulation-bin/run_binary_paper2/organization/3rd/runner_NOOVERLAP b/simulation-bin/run_binary_paper2/organization/3rd/runner_NOOVERLAP new file mode 100644 index 0000000..070aa10 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/runner_NOOVERLAP @@ -0,0 +1,10 @@ +#!/bin/sh +learned_dir="../2nd" + +### NO ### +cd NOOVERLAP +cp "../$learned_dir/NOOVERLAP/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/NOOVERLAP/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd .. diff --git a/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP10 b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP10 new file mode 100644 index 0000000..b3f9c99 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP10 @@ -0,0 +1,22 @@ +#!/bin/sh +learned_dir="../2nd" + +### 10 ### +cd OVERLAP10 +cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP10 no ABC" +cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP10 no AC, no ABC" +cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP10 no BC, no ABC" +cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd .. 
diff --git a/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP15 b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP15 new file mode 100644 index 0000000..e8df3ba --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP15 @@ -0,0 +1,22 @@ +#!/bin/sh +learned_dir="../2nd" + +### 15 ### +cd OVERLAP15 +cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP15 no ABC" +cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP15 no AC, no ABC" +cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP15 no BC, no ABC" +cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd ..
diff --git a/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP20 b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP20 new file mode 100644 index 0000000..9b1c4c3 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization/3rd/runner_OVERLAP20 @@ -0,0 +1,22 @@ +#!/bin/sh +learned_dir="../2nd" + +### 20 ### +cd OVERLAP20 +cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP20 no ABC" +cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP20 no AC, no ABC" +cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run +cd "../OVERLAP20 no BC, no ABC" +cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd .. diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/net.out new file mode 100755 index 0000000..2cb2eb5 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/run b/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/run new file mode 100755 index 0000000..397a3d8 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/1st/FIRST/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 0 ...
12, learning A +./net.out -Nl=50 -Nl_inh=25 -t_max=12 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat10.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="1st CA" + +cd ../../2nd +/bin/sh runner + diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/net.out new file mode 100755 index 0000000..bac849c Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/run b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/run new file mode 100755 index 0000000..2eaceef --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/NOOVERLAP/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_NOOVERLAP + diff --git a/simulation-bin/net_irs.out b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/net.out similarity index 58% rename from simulation-bin/net_irs.out rename to simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/net.out index c581ee3..45e8a82 100755 Binary files a/simulation-bin/net_irs.out and b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/run b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/run new file mode 100755 index 0000000..5e493f4 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP10/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 
15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP10 + diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/net.out new file mode 100755 index 0000000..4ac99c7 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/run b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/run new file mode 100755 index 0000000..206b2e9 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP15/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP15 + diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/net.out new file mode 100755 index 0000000..45a2710 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/run b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/run new file mode 100755 index 0000000..baf0816 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/OVERLAP20/run @@ -0,0 +1,8 @@ +#!/bin/sh + +# t = 12 ... 
15, learning B +./net.out -Nl=50 -Nl_inh=25 -t_max=15 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat13.0 -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="2nd CA" + +cd ../../3rd +/bin/sh runner_OVERLAP20 + diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/2nd/runner b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/runner new file mode 100644 index 0000000..b8748ee --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/2nd/runner @@ -0,0 +1,28 @@ +#!/bin/sh +learned_dir="../1st/FIRST" + +### 10 ### +cd OVERLAP10 +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### 15 ### +cd ../OVERLAP15 +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### 20 ### +cd "../OVERLAP20" +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +### NO ### +cd ../NOOVERLAP +cp "../$learned_dir/"*/*"_connections.txt" connections.txt +cp "../$learned_dir/"*"/saved_state.txt" saved_state.txt +screen -d -m /bin/sh run + +cd .. diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/net.out new file mode 100755 index 0000000..3e06c08 Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/net.out differ diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/run new file mode 100755 index 0000000..8224290 --- /dev/null +++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/NOOVERLAP/run @@ -0,0 +1,5 @@ +#!/bin/sh + +# t = 15 ... 
28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/net.out
new file mode 100755
index 0000000..704c4fd
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/net.out
new file mode 100755
index 0000000..a99482f
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no AC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/net.out
new file mode 100755
index 0000000..16da0e6
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10 no BC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/net.out
new file mode 100755
index 0000000..ec1704d
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP10/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/net.out
new file mode 100755
index 0000000..21e0b1c
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/net.out
new file mode 100755
index 0000000..d2442a8
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no AC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/net.out
new file mode 100755
index 0000000..6a4bae9
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15 no BC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/net.out
new file mode 100755
index 0000000..ef9ce11
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP15/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/net.out
new file mode 100755
index 0000000..e66895e
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/net.out
new file mode 100755
index 0000000..13ce318
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no AC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/net.out
new file mode 100755
index 0000000..cd5f070
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20 no BC, no ABC/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/net.out b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/net.out
new file mode 100755
index 0000000..282932d
Binary files /dev/null and b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/net.out differ
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/run b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/run
new file mode 100755
index 0000000..8224290
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/OVERLAP20/run
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+# t = 15 ... 28810, learning C and consolidating
+./net.out -Nl=50 -Nl_inh=25 -t_max=28812 -N_stim=25 -pc=0.1 -zmax=1 -learn=TRIPLETat16.0 -recall=F100D1at28810.0 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="3rd CA"
+
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_NOOVERLAP b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_NOOVERLAP
new file mode 100644
index 0000000..070aa10
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_NOOVERLAP
@@ -0,0 +1,10 @@
+#!/bin/sh
+learned_dir="../2nd"
+
+### NO ###
+cd NOOVERLAP
+cp "../$learned_dir/NOOVERLAP/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/NOOVERLAP/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP10 b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP10
new file mode 100644
index 0000000..b3f9c99
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP10
@@ -0,0 +1,22 @@
+#!/bin/sh
+learned_dir="../2nd"
+
+### 10 ###
+cd OVERLAP10
+cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no ABC"
+cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no AC, no ABC"
+cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP10 no BC, no ABC"
+cp "../$learned_dir/OVERLAP10/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP10/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP15 b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP15
new file mode 100644
index 0000000..e8df3ba
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP15
@@ -0,0 +1,22 @@
+#!/bin/sh
+learned_dir="../2nd"
+
+### 10 ###
+cd OVERLAP15
+cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no ABC"
+cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no AC, no ABC"
+cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP15 no BC, no ABC"
+cp "../$learned_dir/OVERLAP15/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP15/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP20 b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP20
new file mode 100644
index 0000000..9b1c4c3
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/organization_noLTD/3rd/runner_OVERLAP20
@@ -0,0 +1,22 @@
+#!/bin/sh
+learned_dir="../2nd"
+
+### 10 ###
+cd OVERLAP20
+cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no ABC"
+cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no AC, no ABC"
+cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+cd "../OVERLAP20 no BC, no ABC"
+cp "../$learned_dir/OVERLAP20/"*/*"_connections.txt" connections.txt
+cp "../$learned_dir/OVERLAP20/"*"/saved_state.txt" saved_state.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/A/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/A/run
new file mode 100644
index 0000000..ef207bb
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/A/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_NOOVERLAP_A
+cd "../3. Switching after 1 h"
+/bin/sh runner_NOOVERLAP_A
+cd "../4. Switching after 4 h"
+/bin/sh runner_NOOVERLAP_A
+cd "../5. Switching after 7 h"
+/bin/sh runner_NOOVERLAP_A
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/B/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/B/run
new file mode 100644
index 0000000..9489e8d
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/B/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_NOOVERLAP_B
+cd "../3. Switching after 1 h"
+/bin/sh runner_NOOVERLAP_B
+cd "../4. Switching after 4 h"
+/bin/sh runner_NOOVERLAP_B
+cd "../5. Switching after 7 h"
+/bin/sh runner_NOOVERLAP_B
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/C/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/C/run
new file mode 100644
index 0000000..1418948
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/NOOVERLAP/C/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_NOOVERLAP_C
+cd "../3. Switching after 1 h"
+/bin/sh runner_NOOVERLAP_C
+cd "../4. Switching after 4 h"
+/bin/sh runner_NOOVERLAP_C
+cd "../5. Switching after 7 h"
+/bin/sh runner_NOOVERLAP_C
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/A/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/A/run
new file mode 100644
index 0000000..f37b28b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/A/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_A"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_A"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_A"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_A"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/B/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/B/run
new file mode 100644
index 0000000..e5bc009
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/B/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_B"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_B"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_B"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_B"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/C/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/C/run
new file mode 100644
index 0000000..cfb3b10
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no AC, no ABC/C/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_C"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_C"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_C"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no AC, no ABC_C"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/A/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/A/run
new file mode 100644
index 0000000..e601e38
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/A/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_A"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_A"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_A"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_A"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/B/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/B/run
new file mode 100644
index 0000000..3a4206d
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/B/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_B"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_B"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_B"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_B"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/C/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/C/run
new file mode 100644
index 0000000..f486947
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10 no BC, no ABC/C/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_C"
+cd "../3. Switching after 1 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_C"
+cd "../4. Switching after 4 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_C"
+cd "../5. Switching after 7 h"
+/bin/sh "runner_OVERLAP10 no BC, no ABC_C"
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/A/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/A/run
new file mode 100644
index 0000000..af7d7f0
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/A/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_OVERLAP10_A
+cd "../3. Switching after 1 h"
+/bin/sh runner_OVERLAP10_A
+cd "../4. Switching after 4 h"
+/bin/sh runner_OVERLAP10_A
+cd "../5. Switching after 7 h"
+/bin/sh runner_OVERLAP10_A
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/B/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/B/run
new file mode 100644
index 0000000..a28fcaf
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/B/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_OVERLAP10_B
+cd "../3. Switching after 1 h"
+/bin/sh runner_OVERLAP10_B
+cd "../4. Switching after 4 h"
+/bin/sh runner_OVERLAP10_B
+cd "../5. Switching after 7 h"
+/bin/sh runner_OVERLAP10_B
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/C/run b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/C/run
new file mode 100644
index 0000000..e07d538
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/OVERLAP10/C/run
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+./net.out -Nl=50 -Nl_inh=25 -t_max=25300 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -zmax=1 -r=0.5 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Priming by recall"
+
+cd "../../../2. Switching after 10 min"
+/bin/sh runner_OVERLAP10_C
+cd "../3. Switching after 1 h"
+/bin/sh runner_OVERLAP10_C
+cd "../4. Switching after 4 h"
+/bin/sh runner_OVERLAP10_C
+cd "../5. Switching after 7 h"
+/bin/sh runner_OVERLAP10_C
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/runner b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/runner
new file mode 100644
index 0000000..18013ad
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/1. Priming/runner
@@ -0,0 +1,79 @@
+#!/bin/sh
+
+# first copies data from learning/consolidation simulation and then runs simulation to prime one of the assemblies;
+# later, other simulations are run to investigate spontaneous activation
+
+# uses 'screen' to run process(es) in the background
+
+organization_dir="../../../organization" # re-uses "FIRST", "SECOND", and "THIRD" binaries from "organization"
+learned_dir="../../../organization/3rd" # starts after the third assembly has been learned and consolidated
+
+# OVERLAP10
+cd OVERLAP10/A
+cp "../$organization_dir/1st/FIRST/net.out" .
+cp "../$learned_dir/OVERLAP10/connections.txt" .
+cp "../$learned_dir/OVERLAP10/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd ../B
+cp "../$organization_dir/2nd/OVERLAP10/net.out" .
+cp "../$learned_dir/OVERLAP10/connections.txt" .
+cp "../$learned_dir/OVERLAP10/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+cd ../C
+cp "../$organization_dir/3rd/OVERLAP10/net.out" .
+cp "../$learned_dir/OVERLAP10/connections.txt" .
+cp "../$learned_dir/OVERLAP10/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run + +# OVERLAP10 no AC, no ABC +cd "../../OVERLAP10 no AC, no ABC/A" +cp "../$organization_dir/1st/FIRST/net.out" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../B" +cp "../$organization_dir/2nd/OVERLAP10/net.out" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../C" +cp "../$organization_dir/3rd/OVERLAP10 no AC, no ABC/net.out" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no AC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run + +# OVERLAP10 no BC, no ABC +cd "../../OVERLAP10 no BC, no ABC/A" +cp "../$organization_dir/1st/FIRST/net.out" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../B" +cp "../$organization_dir/2nd/OVERLAP10/net.out" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../C" +cp "../$organization_dir/3rd/OVERLAP10 no BC, no ABC/net.out" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/connections.txt" . +cp "../$learned_dir/OVERLAP10 no BC, no ABC/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run + +# NOOVERLAP +cd "../../NOOVERLAP/A" +cp "../$organization_dir/1st/FIRST/net.out" . +cp "../$learned_dir/NOOVERLAP/connections.txt" . 
+cp "../$learned_dir/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../B" +cp "../$organization_dir/2nd/NOOVERLAP/net.out" . +cp "../$learned_dir/NOOVERLAP/connections.txt" . +cp "../$learned_dir/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run +cd "../C" +cp "../$organization_dir/3rd/NOOVERLAP/net.out" . +cp "../$learned_dir/NOOVERLAP/connections.txt" . +cp "../$learned_dir/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/NOOVERLAP/run b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/NOOVERLAP/run new file mode 100644 index 0000000..3c63a5b --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/NOOVERLAP/run @@ -0,0 +1,15 @@ +#!/bin/sh + +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. 
act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." + +python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False + diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10 no AC, no ABC/run new file mode 100644 index 0000000..406faf4 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10 no AC, no ABC/run @@ -0,0 +1,15 @@ +#!/bin/sh + +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." 
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." + +python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, no ABC" 0.01 10 False + diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/2. 
Switching after 10 min/OVERLAP10 no BC, no ABC/run new file mode 100644 index 0000000..0f2553e --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10 no BC, no ABC/run @@ -0,0 +1,15 @@ +#!/bin/sh + +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." 
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10/run b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10/run
new file mode 100644
index 0000000..d0ad253
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/OVERLAP10/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_A b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_A
new file mode 100644
index 0000000..b2853ca
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/A/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/A/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_B b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_B
new file mode 100644
index 0000000..9255c52
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/B/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/B/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_C b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_C
new file mode 100644
index 0000000..aea88cf
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_NOOVERLAP_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/C/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/C/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_A
new file mode 100644
index 0000000..12d55bc
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_B
new file mode 100644
index 0000000..4aafc77
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_C
new file mode 100644
index 0000000..d3cbce4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no AC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_A
new file mode 100644
index 0000000..d67fc99
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_B
new file mode 100644
index 0000000..0eac72a
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_C
new file mode 100644
index 0000000..82a2bca
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10 no BC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_A b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_A
new file mode 100644
index 0000000..9bdcbe4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/A/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_B b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_B
new file mode 100644
index 0000000..2bd8da2
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/B/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_C b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_C
new file mode 100644
index 0000000..c896294
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/2. Switching after 10 min/runner_OVERLAP10_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/C/"*/network_plots/*_net_620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/NOOVERLAP/run b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/NOOVERLAP/run
new file mode 100644
index 0000000..3c63a5b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/NOOVERLAP/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no AC, no ABC/run
new file mode 100644
index 0000000..406faf4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no AC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no BC, no ABC/run
new file mode 100644
index 0000000..0f2553e
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10/run b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10/run
new file mode 100644
index 0000000..d0ad253
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/OVERLAP10/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_A b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_A
new file mode 100644
index 0000000..8d0c3ef
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/A/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/A/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_B b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_B
new file mode 100644
index 0000000..f1ba8dc
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/B/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/B/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_C b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_C
new file mode 100644
index 0000000..d96ea03
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_NOOVERLAP_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/C/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/C/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_A
new file mode 100644
index 0000000..ef0117c
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_B
new file mode 100644
index 0000000..31c5885
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_C
new file mode 100644
index 0000000..814f1c1
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no AC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_A
new file mode 100644
index 0000000..9b6289a
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_B
new file mode 100644
index 0000000..09bb67b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_C
new file mode 100644
index 0000000..96b2056
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10 no BC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_A b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_A
new file mode 100644
index 0000000..8795053
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/A/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_B b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_B
new file mode 100644
index 0000000..55eeef4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/B/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_C b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_C
new file mode 100644
index 0000000..d87d4cb
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/3. Switching after 1 h/runner_OVERLAP10_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/C/"*/network_plots/*_net_3620.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/NOOVERLAP/run b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/NOOVERLAP/run
new file mode 100644
index 0000000..3c63a5b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/NOOVERLAP/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no AC, no ABC/run
new file mode 100644
index 0000000..406faf4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no AC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no BC, no ABC/run
new file mode 100644
index 0000000..0f2553e
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10/run b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10/run
new file mode 100644
index 0000000..d0ad253
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/OVERLAP10/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_A b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_A
new file mode 100644
index 0000000..72ea5d0
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/A/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/A/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
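Each 'run' script above launches ten independent trials of spontaneous network activity by repeating one identical `net_activation.out` invocation ten times. As a sketch (not part of the diff), the repetition could equivalently be written as a POSIX-sh loop; `NET_BIN` is an assumed wrapper that defaults to `echo` here so the sketch dry-runs without the simulation binary:

```shell
#!/bin/sh
# Illustrative loop form of the ten identical trial invocations in the
# 'run' scripts above. NET_BIN=echo makes this a dry run (assumption
# for the sketch); set NET_BIN="" in a directory containing the binary
# to execute the trials for real.
NET_BIN="${NET_BIN:-echo}"
trial=1
while [ "$trial" -le 10 ]; do
    $NET_BIN ./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 \
        -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 \
        -output_period=10 -I_const=0.15 -sigma_WN=0.05 \
        -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
    trial=$((trial + 1))
done
```

The shipped scripts keep the calls unrolled, which makes every trial's full command line explicit in the file; the loop form is behaviorally equivalent and easier to adapt when the trial count changes.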
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_B b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_B
new file mode 100644
index 0000000..1026c24
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/B/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/B/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_C b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_C
new file mode 100644
index 0000000..342d1ba
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_NOOVERLAP_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/C/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/C/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_A
new file mode 100644
index 0000000..40229b9
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_B
new file mode 100644
index 0000000..48e50c4
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_C
new file mode 100644
index 0000000..5a85a9f
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no AC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no AC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_A
new file mode 100644
index 0000000..b11afca
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/A"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_B
new file mode 100644
index 0000000..873bd93
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/B"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_C
new file mode 100644
index 0000000..3cc1cce
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10 no BC, no ABC_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd "OVERLAP10 no BC, no ABC/C"
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_A b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_A
new file mode 100644
index 0000000..1ec16e5
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/A/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/A/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_B b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_B
new file mode 100644
index 0000000..876d867
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_B
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/B
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/B/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/B/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_C b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_C
new file mode 100644
index 0000000..eafda39
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/4. Switching after 4 h/runner_OVERLAP10_C
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd OVERLAP10/C
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/OVERLAP10/C/connections.txt" connections.txt
+cp "../../../1. Priming/OVERLAP10/C/"*/network_plots/*_net_14420.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/NOOVERLAP/run b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/NOOVERLAP/run
new file mode 100644
index 0000000..3c63a5b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/NOOVERLAP/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." + +python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False + diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10 no AC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10 no AC, no ABC/run new file mode 100644 index 0000000..406faf4 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10 no AC, no ABC/run @@ -0,0 +1,15 @@ +#!/bin/sh + +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." 
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." +./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act." + +python3 assemblyAvalancheStatistics.py "OVERLAP10 no AC, no ABC" 0.01 10 False + diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10 no BC, no ABC/run b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/OVERLAP10 no BC, no ABC/run
new file mode 100644
index 0000000..0f2553e
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10 no BC, no ABC/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10 no BC, no ABC" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10/run b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10/run
new file mode 100644
index 0000000..d0ad253
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/OVERLAP10/run
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+./net_activation.out -Nl=50 -Nl_inh=25 -t_max=180 -N_stim=25 -pc=0.1 -learn= -recall= -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="Spont. act."
+
+python3 assemblyAvalancheStatistics.py "OVERLAP10" 0.01 10 False
+
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_A b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_A
new file mode 100644
index 0000000..fe8bd3a
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_A
@@ -0,0 +1,13 @@
+#!/bin/sh
+
+cd NOOVERLAP/A
+cp "../../../../activation/net_activation.out" .
+cp "../run" .
+cp "../../../1. Priming/NOOVERLAP/A/connections.txt" connections.txt
+cp "../../../1. Priming/NOOVERLAP/A/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt
+cp "../../../../../../analysis/overlapParadigms.py" .
+cp "../../../../../../analysis/utilityFunctions.py" .
+cp "../../../../../../analysis/assemblyAvalancheStatistics.py" .
+screen -d -m /bin/sh run
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_B b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_B new file mode 100644 index 0000000..589d17b --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_B @@ -0,0 +1,13 @@ +#!/bin/sh + +cd NOOVERLAP/B +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/NOOVERLAP/B/connections.txt" connections.txt +cp "../../../1. Priming/NOOVERLAP/B/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_C b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_C new file mode 100644 index 0000000..baccf42 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_NOOVERLAP_C @@ -0,0 +1,13 @@ +#!/bin/sh + +cd NOOVERLAP/C +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/NOOVERLAP/C/connections.txt" connections.txt +cp "../../../1. Priming/NOOVERLAP/C/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/runner_OVERLAP10 no AC, no ABC_A new file mode 100644 index 0000000..62c4865 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_A @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no AC, no ABC/A" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/A/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_B new file mode 100644 index 0000000..32d6a20 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_B @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no AC, no ABC/B" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/B/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/runner_OVERLAP10 no AC, no ABC_C new file mode 100644 index 0000000..c3dfae0 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no AC, no ABC_C @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no AC, no ABC/C" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no AC, no ABC/C/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_A b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_A new file mode 100644 index 0000000..7cca4f3 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_A @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no BC, no ABC/A" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/A/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_B b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/runner_OVERLAP10 no BC, no ABC_B new file mode 100644 index 0000000..5de5b1a --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_B @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no BC, no ABC/B" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/B/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_C b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_C new file mode 100644 index 0000000..5bb6875 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10 no BC, no ABC_C @@ -0,0 +1,13 @@ +#!/bin/sh + +cd "OVERLAP10 no BC, no ABC/C" +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10 no BC, no ABC/C/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_A b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/runner_OVERLAP10_A new file mode 100644 index 0000000..c80c8b8 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_A @@ -0,0 +1,13 @@ +#!/bin/sh + +cd OVERLAP10/A +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10/A/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10/A/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_B b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_B new file mode 100644 index 0000000..8540c3d --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_B @@ -0,0 +1,13 @@ +#!/bin/sh + +cd OVERLAP10/B +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10/B/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10/B/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_C b/simulation-bin/run_binary_paper2/priming_and_activation/5. Switching after 7 h/runner_OVERLAP10_C new file mode 100644 index 0000000..f063f66 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/5. 
Switching after 7 h/runner_OVERLAP10_C @@ -0,0 +1,13 @@ +#!/bin/sh + +cd OVERLAP10/C +cp "../../../../activation/net_activation.out" . +cp "../run" . +cp "../../../1. Priming/OVERLAP10/C/connections.txt" connections.txt +cp "../../../1. Priming/OVERLAP10/C/"*/network_plots/*_net_25220.0.txt coupling_strengths.txt +cp "../../../../../../analysis/overlapParadigms.py" . +cp "../../../../../../analysis/utilityFunctions.py" . +cp "../../../../../../analysis/assemblyAvalancheStatistics.py" . +screen -d -m /bin/sh run + +cd ../.. diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/averageFileColumnsAdvancedMod.py b/simulation-bin/run_binary_paper2/priming_and_activation/averageFileColumnsAdvancedMod.py new file mode 100755 index 0000000..9836309 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/averageFileColumnsAdvancedMod.py @@ -0,0 +1,269 @@ +############################################################################################## +### Script to average data from the same columns in data files stored in different folders ### +############################################################################################## + +### Copyright 2017-2021 Jannik Luboeinski +### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) + +import numpy as np +import os +from pathlib import Path +from mergeRawData import * + +# averageFileColumns +# Averages specified data columns across data files located in directories which names contain a specific string +# and computes the standard deviation +# outname: name of the file to write the averaged data to +# rootpath: path in which to look for data folders +# protocol: string that the data folders have to contain +# suffix: suffix in the filename of data files to be read +# columns: list of numbers of the columns in the data file to be read and averaged (e.g., [1, 3] for first and third column) +# first_column_par [optional]: indicates if first column is to be treated as 
parameter (e.g., time) - it is then added regardless of 'columns' +# comment_line [optional]: if True, leaves out the first line +def averageFileColumns(outname, rootpath, protocol, suffix, columns, first_column_par=True, comment_line=False): + print("Averaging columns " + str(columns) + " from files matching '*" + suffix + "' in folders of the protocol '" + protocol + "'...") + sample_number = 0 + col_sep = '= ' #'\t\t' character(s) separating the columns + + # find the folders with the protocol in their name + rawpaths = Path(rootpath) + paths = np.array([str(x) for x in rawpaths.iterdir() if x.is_dir() and protocol in str(x)]) + + if paths.size == 0: + raise FileNotFoundError("No folders found that contain the string '" + protocol + "' in their name.") + print("According folders found:\n", paths) + + # read data and average + # loop over directories + for i in range(paths.size): + + # find the files with the suffix in their name + subrawpaths = Path(paths[i]) + subpaths = np.array([str(x) for x in subrawpaths.iterdir() if str(x).find(suffix) >= len(str(x))-len(suffix)]) + + if subpaths.size == 0: + raise FileNotFoundError("No files found matching '*" + suffix + "' in '" + paths[i] + "'.") + + print("According files found in '" + paths[i] + "':\n", subpaths) + sample_number += subpaths.size + + # loop over files in each directory + for j in range(subpaths.size): + + with open(subpaths[j]) as f: + rawdata = f.read() + + rawdata = rawdata.split('\n') + if comment_line: + del rawdata[0] # leave out comment line + if rawdata[-1] == "": + del rawdata[-1] # delete empty line + + if i == 0 and j == 0: # first file found: read number of rows and create data arrays + num_rows = len(rawdata) + num_cols = len(columns) + time = np.zeros(num_rows) + data = np.zeros((num_rows, num_cols)) + data_var = np.zeros((num_rows, num_cols)) + elif num_rows != len(rawdata): + raise IndexError("In '" + subpaths[j] + "': wrong number of rows: " + str(len(rawdata)-1) + " (" + str(num_rows) 
+ " expected).") + + for k in range(num_rows): + values = rawdata[k].split(col_sep) + if len(values) < 2: + values = [np.nan,np.nan] # to avoid problems reading descriptions + try: + time[k] += np.double(values[0]) # read first/parameter column + except ValueError: + pass#print("Computing mean: conversion error in line " + str(k+1) + ", column 1\n\tin '" + subpaths[j] + "'.") + for l in range(num_cols): + try: + data[k][l] += np.double(values[columns[l]-1]) # read data columns + except ValueError: + pass#print("Computing mean: conversion error in line " + str(k+1) + ", column " + str(columns[l]) + "\n\tin '" + subpaths[j] + "'.") + + f.close() + + time = time / sample_number + data = data / sample_number + + # read data and compute variance + # loop over directories + for i in range(paths.size): + + # loop over files in each directory + for j in range(subpaths.size): + + with open(subpaths[j]) as f: + rawdata = f.read() + + rawdata = rawdata.split('\n') + + if comment_line: + del rawdata[0] # leave out comment line + if rawdata[-1] == "": + del rawdata[-1] # delete empty line + + for k in range(num_rows): + values = rawdata[k].split(col_sep) + #if len(values) < 2: + # values = [np.nan,np.nan] # to avoid problems reading descriptions + for l in range(num_cols): + try: + data_var[k][l] += np.power(np.double(values[columns[l]-1])-data[k][l], 2) # read data columns + + except: + pass#print("Computing variance: conversion error in line " + str(k+1) + ", column " + str(columns[l]) + "\n\tin '" + subpaths[j] + "'.") + #except IndexError: + # print("INDEX ERROR") + + f.close() + + data_stdev = np.sqrt(data_var / (sample_number - 1)) + + # write averaged data + fout = open(outname + '.txt', 'w') + + for k in range(num_rows): ## ADAPTED + + if k >=4 and k <=7: # only need those four rows! 
+ for l in range(num_cols): + fout.write(str(data[k][l]) + "\t" + str(data_stdev[k][l])) + if (k+1) % 4 == 0 and l >= num_cols-1: # after the last column and after 4 rows have been clutched together + fout.write("\n") + else: # as long as last column is not yet reached + fout.write("\t") + fout.close() + +f = open("p_act_summary_temp_0names.txt", "w") +f.write("NOOVERLAP, A primed\n") +f.write("NOOVERLAP, B primed\n") +f.write("NOOVERLAP, C primed\n") +f.write("OVERLAP10, A primed\n") +f.write("OVERLAP10, B primed\n") +f.write("OVERLAP10, C primed\n") +f.write("OVERLAP10 no AC, no ABC, A primed\n") +f.write("OVERLAP10 no AC, no ABC, B primed\n") +f.write("OVERLAP10 no AC, no ABC, C primed\n") +f.write("OVERLAP10 no BC, no ABC, A primed\n") +f.write("OVERLAP10 no BC, no ABC, B primed\n") +f.write("OVERLAP10 no BC, no ABC, C primed\n") +f.close() + +# 10 min +averageFileColumns("p_act_averaged_Aprimed", "2. Switching after 10 min/NOOVERLAP/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "2. Switching after 10 min/NOOVERLAP/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "2. Switching after 10 min/NOOVERLAP/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_10min_NOOVERLAP.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "2. Switching after 10 min/OVERLAP10/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "2. Switching after 10 min/OVERLAP10/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "2. 
Switching after 10 min/OVERLAP10/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_10min_OVERLAP10.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "2. Switching after 10 min/OVERLAP10 no AC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "2. Switching after 10 min/OVERLAP10 no AC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "2. Switching after 10 min/OVERLAP10 no AC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_10min_OVERLAP10_noAC_noABC.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "2. Switching after 10 min/OVERLAP10 no BC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "2. Switching after 10 min/OVERLAP10 no BC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "2. 
Switching after 10 min/OVERLAP10 no BC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_10min_OVERLAP10_noBC_noABC.txt", remove_raw=True, sep_str='\n') + +os.system('cat "p_act_10min_NOOVERLAP.txt" > "p_act_summary_temp_10min.txt"') +os.system('cat "p_act_10min_OVERLAP10.txt" >> "p_act_summary_temp_10min.txt"') +os.system('cat "p_act_10min_OVERLAP10_noAC_noABC.txt" >> "p_act_summary_temp_10min.txt"') +os.system('cat "p_act_10min_OVERLAP10_noBC_noABC.txt" >> "p_act_summary_temp_10min.txt"') +os.system('rm -R -f "p_act_10min"*') +mergeRawData(".", "p_act_summary_temp_", "p_act_summary_10min.txt", remove_raw=False, sep_str='\t') # remove_raw=False to keep _temp_0names file +os.system('rm -f p_act_summary_temp_10min.txt') + +# 1 h +averageFileColumns("p_act_averaged_Aprimed", "3. Switching after 1 h/NOOVERLAP/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "3. Switching after 1 h/NOOVERLAP/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "3. Switching after 1 h/NOOVERLAP/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_1h_NOOVERLAP.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "3. Switching after 1 h/OVERLAP10/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "3. Switching after 1 h/OVERLAP10/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "3. 
Switching after 1 h/OVERLAP10/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_1h_OVERLAP10.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "3. Switching after 1 h/OVERLAP10 no AC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "3. Switching after 1 h/OVERLAP10 no AC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "3. Switching after 1 h/OVERLAP10 no AC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_1h_OVERLAP10_noAC_noABC.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "3. Switching after 1 h/OVERLAP10 no BC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "3. Switching after 1 h/OVERLAP10 no BC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "3. 
Switching after 1 h/OVERLAP10 no BC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_1h_OVERLAP10_noBC_noABC.txt", remove_raw=True, sep_str='\n') + +os.system('cat "p_act_1h_NOOVERLAP.txt" > "p_act_summary_temp_1h.txt"') +os.system('cat "p_act_1h_OVERLAP10.txt" >> "p_act_summary_temp_1h.txt"') +os.system('cat "p_act_1h_OVERLAP10_noAC_noABC.txt" >> "p_act_summary_temp_1h.txt"') +os.system('cat "p_act_1h_OVERLAP10_noBC_noABC.txt" >> "p_act_summary_temp_1h.txt"') +os.system('rm -R -f "p_act_1h"*') +mergeRawData(".", "p_act_summary_temp_", "p_act_summary_1h.txt", remove_raw=False, sep_str='\t') +os.system('rm -f p_act_summary_temp_1h.txt') + +# 4 h +averageFileColumns("p_act_averaged_Aprimed", "4. Switching after 4 h/NOOVERLAP/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "4. Switching after 4 h/NOOVERLAP/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "4. Switching after 4 h/NOOVERLAP/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_4h_NOOVERLAP.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "4. Switching after 4 h/OVERLAP10/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "4. Switching after 4 h/OVERLAP10/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "4. 
Switching after 4 h/OVERLAP10/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_4h_OVERLAP10.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "4. Switching after 4 h/OVERLAP10 no AC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "4. Switching after 4 h/OVERLAP10 no AC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "4. Switching after 4 h/OVERLAP10 no AC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_4h_OVERLAP10_noAC_noABC.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "4. Switching after 4 h/OVERLAP10 no BC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "4. Switching after 4 h/OVERLAP10 no BC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "4. 
Switching after 4 h/OVERLAP10 no BC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_4h_OVERLAP10_noBC_noABC.txt", remove_raw=True, sep_str='\n') + +os.system('cat "p_act_4h_NOOVERLAP.txt" > "p_act_summary_temp_4h.txt"') +os.system('cat "p_act_4h_OVERLAP10.txt" >> "p_act_summary_temp_4h.txt"') +os.system('cat "p_act_4h_OVERLAP10_noAC_noABC.txt" >> "p_act_summary_temp_4h.txt"') +os.system('cat "p_act_4h_OVERLAP10_noBC_noABC.txt" >> "p_act_summary_temp_4h.txt"') +os.system('rm -R -f "p_act_4h"*') +mergeRawData(".", "p_act_summary_temp_", "p_act_summary_4h.txt", remove_raw=False, sep_str='\t') +os.system('rm -f p_act_summary_temp_4h.txt') + +# 7 h +averageFileColumns("p_act_averaged_Aprimed", "5. Switching after 7 h/NOOVERLAP/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "5. Switching after 7 h/NOOVERLAP/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "5. Switching after 7 h/NOOVERLAP/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_7h_NOOVERLAP.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "5. Switching after 7 h/OVERLAP10/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "5. Switching after 7 h/OVERLAP10/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "5. 
Switching after 7 h/OVERLAP10/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_7h_OVERLAP10.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "5. Switching after 7 h/OVERLAP10 no AC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "5. Switching after 7 h/OVERLAP10 no AC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "5. Switching after 7 h/OVERLAP10 no AC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_7h_OVERLAP10_noAC_noABC.txt", remove_raw=True, sep_str='\n') + +averageFileColumns("p_act_averaged_Aprimed", "5. Switching after 7 h/OVERLAP10 no BC, no ABC/A", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Bprimed", "5. Switching after 7 h/OVERLAP10 no BC, no ABC/B", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +averageFileColumns("p_act_averaged_Cprimed", "5. 
Switching after 7 h/OVERLAP10 no BC, no ABC/C", "avalanche_statistics_0.01_10", "_CA_probabilities.txt", [2], first_column_par=False) +mergeRawData(".", "p_act_averaged_", "p_act_7h_OVERLAP10_noBC_noABC.txt", remove_raw=True, sep_str='\n') + +os.system('cat "p_act_7h_NOOVERLAP.txt" > "p_act_summary_temp_7h.txt"') +os.system('cat "p_act_7h_OVERLAP10.txt" >> "p_act_summary_temp_7h.txt"') +os.system('cat "p_act_7h_OVERLAP10_noAC_noABC.txt" >> "p_act_summary_temp_7h.txt"') +os.system('cat "p_act_7h_OVERLAP10_noBC_noABC.txt" >> "p_act_summary_temp_7h.txt"') +os.system('rm -R -f "p_act_7h"*') +mergeRawData(".", "p_act_summary_temp_", "p_act_summary_7h.txt", remove_raw=True, sep_str='\t') + +os.system('cp "./1. Priming/NOOVERLAP/A/"*/*"_PARAMS.txt" .') + diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/finalize b/simulation-bin/run_binary_paper2/priming_and_activation/finalize new file mode 100644 index 0000000..217e167 --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/finalize @@ -0,0 +1,4 @@ +#!/bin/sh + +# the following script collects all data from the simulation folders and creates one final data file: +python3 averageFileColumnsAdvancedMod.py diff --git a/simulation-bin/run_binary_paper2/priming_and_activation/mergeRawData.py b/simulation-bin/run_binary_paper2/priming_and_activation/mergeRawData.py new file mode 100755 index 0000000..1c520ff --- /dev/null +++ b/simulation-bin/run_binary_paper2/priming_and_activation/mergeRawData.py @@ -0,0 +1,55 @@ +################################################################### +### Script to merge data from multiple files into a single file ### +################################################################### + +### Copyright 2020-2021 Jannik Luboeinski +### licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) + +from pathlib import Path +import os + +###################################### +# mergeRawData +# Looks in a specified directory for files with a 
certain substring in the filename and merges them
+# (merging the content of the lines) to a single file
+# rootpath: relative path to the output directory
+# substr: string that the filename of files to be merged has to contain
+# output_file: name of the output file
+# remove_raw [optional]: removes the raw data files
+# sep_str [optional]: the character or string by which to separate the lines in the output file
+def mergeRawData(rootpath, substr, output_file, remove_raw=False, sep_str='\t\t'):
+
+	path = Path(rootpath)
+	num_rows = -1
+	all_data = []
+
+	for x in sorted(path.iterdir()): # loop through files in the output directory
+		x_str = str(x)
+		if not x.is_dir() and substr in x_str:
+
+			f = open(x_str)
+			single_trial_data = f.read()
+			f.close()
+
+			single_trial_data = single_trial_data.split('\n')
+
+			if single_trial_data[-1] == "":
+				del single_trial_data[-1] # delete empty line
+
+			if len(single_trial_data) != num_rows:
+				if num_rows == -1:
+					num_rows = len(single_trial_data)
+					all_data = single_trial_data
+				else:
+					raise Exception("Wrong number of rows encountered in: " + x_str)
+			else:
+				for i in range(num_rows):
+					all_data[i] += sep_str + single_trial_data[i]
+
+			if remove_raw:
+				os.remove(x_str)
+
+	fout = open(os.path.join(rootpath, output_file), "w")
+	for i in range(num_rows):
+		fout.write(all_data[i] + '\n')
+	fout.close()
diff --git a/simulation-bin/run_binary_paper2/recall/Control/run b/simulation-bin/run_binary_paper2/recall/Control/run
new file mode 100644
index 0000000..165de35
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/recall/Control/run
@@ -0,0 +1,11 @@
+#!/bin/sh
+
+./netA.out -Nl=50 -Nl_inh=25 -t_max=30 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallA"
+./netB.out -Nl=50 -Nl_inh=25 -t_max=60 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at40 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallB"
+./netC.out -Nl=50 -Nl_inh=25 -t_max=90 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at70 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallC"
+
+cat *RecallA/*spike_raster.txt > spike_raster.txt
+cat *RecallB/*spike_raster.txt >> spike_raster.txt
+cat *RecallC/*spike_raster.txt >> spike_raster.txt
+
+python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False
diff --git a/simulation-bin/run_binary_paper2/recall/NOOVERLAP/run b/simulation-bin/run_binary_paper2/recall/NOOVERLAP/run
new file mode 100644
index 0000000..165de35
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/recall/NOOVERLAP/run
@@ -0,0 +1,11 @@
+#!/bin/sh
+
+./netA.out -Nl=50 -Nl_inh=25 -t_max=30 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at10 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallA"
+./netB.out -Nl=50 -Nl_inh=25 -t_max=60 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at40 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallB"
+./netC.out -Nl=50 -Nl_inh=25 -t_max=90 -N_stim=25 -pc=0.1 -learn= -recall=F100D1at70 -r=0.2 -z_max=1 -w_ei=2 -w_ie=3.5 -w_ii=3.5 -output_period=10 -I_const=0.15 -sigma_WN=0.05 -theta_p=3.0 -theta_d=1.2 -purpose="RecallC"
+
+cat *RecallA/*spike_raster.txt > spike_raster.txt
+cat *RecallB/*spike_raster.txt >> spike_raster.txt
+cat *RecallC/*spike_raster.txt >> spike_raster.txt
+
+python3 assemblyAvalancheStatistics.py "NOOVERLAP" 0.01 10 False
diff --git a/simulation-bin/run_binary_paper2/recall/netA.out b/simulation-bin/run_binary_paper2/recall/netA.out
new file mode 100755
index 0000000..716b071
Binary files /dev/null and b/simulation-bin/run_binary_paper2/recall/netA.out differ
diff --git a/simulation-bin/run_binary_paper2/recall/netB.out b/simulation-bin/run_binary_paper2/recall/netB.out
new file mode 100755
index 0000000..50e1f43
Binary files /dev/null and b/simulation-bin/run_binary_paper2/recall/netB.out differ
diff --git a/simulation-bin/run_binary_paper2/recall/netC.out b/simulation-bin/run_binary_paper2/recall/netC.out
new file mode 100755
index 0000000..ba214c4
Binary files /dev/null and b/simulation-bin/run_binary_paper2/recall/netC.out differ
diff --git a/simulation-bin/run_binary_paper2/recall/runner b/simulation-bin/run_binary_paper2/recall/runner
new file mode 100644
index 0000000..9d551d2
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/recall/runner
@@ -0,0 +1,28 @@
+#!/bin/sh
+organization_dir="../organization/"
+
+### NOOVERLAP ###
+cd NOOVERLAP
+cp "../netA.out" .
+cp "../netB.out" .
+cp "../netC.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$organization_dir/3rd/NOOVERLAP/connections.txt" .
+cp "../$organization_dir/3rd/NOOVERLAP/"*/*/*"_net_28810.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+### Control ###
+cd ../Control
+cp "../netA.out" .
+cp "../netB.out" .
+cp "../netC.out" .
+cp "../../../../analysis/overlapParadigms.py" .
+cp "../../../../analysis/utilityFunctions.py" .
+cp "../../../../analysis/assemblyAvalancheStatistics.py" .
+cp "../$organization_dir/1st/FIRST/connections.txt" .
+cp "../$organization_dir/1st/FIRST/"*/*/*"_net_0.0.txt" coupling_strengths.txt
+screen -d -m /bin/sh run
+
+cd ..
diff --git a/simulation-bin/run_binary_paper2/run_activation b/simulation-bin/run_binary_paper2/run_activation
new file mode 100644
index 0000000..7a1c39b
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/run_activation
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+cd activation/
+/bin/sh "runner"
+cd ..
diff --git a/simulation-bin/run_binary_paper2/run_learn_cons b/simulation-bin/run_binary_paper2/run_learn_cons
new file mode 100755
index 0000000..9ca9bfb
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/run_learn_cons
@@ -0,0 +1,7 @@
+#!/bin/sh
+
+# learn A, B, C, and consolidate them
+
+cd organization/1st/FIRST
+screen -d -m /bin/sh run # run process(es) in the background using 'screen'
+cd ../../..
diff --git a/simulation-bin/run_binary_paper2/run_learn_cons_noLTD b/simulation-bin/run_binary_paper2/run_learn_cons_noLTD
new file mode 100755
index 0000000..fa60891
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/run_learn_cons_noLTD
@@ -0,0 +1,7 @@
+#!/bin/sh
+
+# learn A, B, C, and consolidate them
+
+cd "organization_noLTD/1st/FIRST"
+screen -d -m /bin/sh run # run process(es) in the background using 'screen'
+cd ../../..
diff --git a/simulation-bin/run_binary_paper2/run_priming_and_activation b/simulation-bin/run_binary_paper2/run_priming_and_activation
new file mode 100644
index 0000000..45f4dda
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/run_priming_and_activation
@@ -0,0 +1,6 @@
+#!/bin/sh
+
+cd "priming_and_activation/1. Priming/"
+/bin/sh "runner"
+
+cd ../..
diff --git a/simulation-bin/run_binary_paper2/run_recall b/simulation-bin/run_binary_paper2/run_recall
new file mode 100644
index 0000000..a3e9249
--- /dev/null
+++ b/simulation-bin/run_binary_paper2/run_recall
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+cd recall/
+/bin/sh "runner"
+cd ..
diff --git a/simulation-code/Definitions.hpp b/simulation-code/Definitions.hpp
index 681af23..32704ce 100755
--- a/simulation-code/Definitions.hpp
+++ b/simulation-code/Definitions.hpp
@@ -43,6 +43,27 @@
 #define CREATE_SCRIPT 1 // creates a gnuplot file for a plot showing the detailed interneuronal connections within the excitatory network
 #define CREATE_PLOT 2 // creates the gnuplot file
+// CELL ASSEMBLIES
+#define FIRST 1 // simply use the first block of neurons in the network as the assembly (would equal a hypothetical "OVERLAP100")
+#define SECOND 2 // simply use the second distinct block of neurons in the network as the assembly (would equal a hypothetical "OVERLAP0")
+#define OVERLAP10_2ND 3 // use a second block of neurons as the assembly, overlapping by 10% with the first assembly
+#define OVERLAP15_2ND 4 // use a second block of neurons as the assembly, overlapping by 15% with the first assembly
+#define OVERLAP20_2ND 5 // use a second block of neurons as the assembly, overlapping by 20% with the first assembly
+#define THIRD 6 // simply use the third distinct block of neurons in the network as the assembly
+#define OVERLAP10_3RD 7 // use a third block of neurons as the assembly, overlapping by 5% with the first assembly exclusively, by 5% with the second assembly exclusively, and by 5% with both
+#define OVERLAP10_3RD_NO_ABC 8 // use a third block of neurons as the assembly, overlapping by 10% with the first assembly exclusively, and by 10% with the second assembly exclusively
+#define OVERLAP10_3RD_NO_AC_NO_ABC 9 // use a third block of neurons as the assembly, overlapping by 10% with the first assembly exclusively
+#define OVERLAP10_3RD_NO_BC_NO_ABC 10 // use a third block of neurons as the assembly, overlapping by 10% with the second assembly exclusively
+#define OVERLAP15_3RD 11 // use a third block of neurons as the assembly, overlapping by 7.5% with the first assembly exclusively, by 7.5% with the second assembly exclusively, and by 7.5% with both
+#define OVERLAP15_3RD_NO_ABC 12 // use a third block of neurons as the assembly, overlapping by 15% with the first assembly exclusively, and by 15% with the second assembly exclusively
+#define OVERLAP15_3RD_NO_AC_NO_ABC 13 // use a third block of neurons as the assembly, overlapping by 15% with the first assembly exclusively
+#define OVERLAP15_3RD_NO_BC_NO_ABC 14 // use a third block of neurons as the assembly, overlapping by 15% with the second assembly exclusively
+#define OVERLAP20_3RD 15 // use a third block of neurons as the assembly, overlapping by 10% with the first assembly exclusively, by 10% with the second assembly exclusively, and by 10% with both
+#define OVERLAP20_3RD_NO_ABC 16 // use a third block of neurons as the assembly, overlapping by 20% with the first assembly exclusively, and by 20% with the second assembly exclusively
+#define OVERLAP20_3RD_NO_AC_NO_ABC 17 // use a third block of neurons as the assembly, overlapping by 20% with the first assembly exclusively
+#define OVERLAP20_3RD_NO_BC_NO_ABC 18 // use a third block of neurons as the assembly, overlapping by 20% with the second assembly exclusively
+#define RAND 19 // use randomly selected neurons as the assembly
+
 // PLASTICITY
 #define CALCIUM 1 // use the calcium model as plasticity mechanism (early phase only)
 #define CALCIUM_AND_STC 2 // use the calcium model with synaptic tagging and capture as plasticity mechanism
diff --git a/simulation-code/Network.cpp b/simulation-code/Network.cpp
index 26c5c01..971d0e7 100755
--- a/simulation-code/Network.cpp
+++ b/simulation-code/Network.cpp
@@ -36,10 +36,10 @@ friend class boost::serialization::access;
 private:
 
 /*** Computational parameters ***/
-double dt; // s, one time step for numerical simulation
+double dt; // s, one timestep for numerical simulation
 int N; // total number of excitatory plus inhibitory neurons
-int t_syn_delay_steps; // constant t_syn_delay converted to time steps
-int t_Ca_delay_steps; // constant t_Ca_delay converted to time steps
+int t_syn_delay_steps; // constant t_syn_delay converted to timesteps
+int t_Ca_delay_steps; // constant t_Ca_delay converted to timesteps
 
 /*** State variables ***/
 vector neurons; // vector of all N neuron instances (first excitatory, then inhibitory)
@@ -59,10 +59,10 @@ double* sum_h_diff_d; // sum of E-LTD changes for each postsynaptic neuron
 protected:
 
 /*** Physical parameters ***/
-int Nl; // number of neurons in one line (row or column) of the exc. population (better choose an odd number, for there exists a "central" neuron)
+int Nl_exc; // number of neurons in one line (row or column) of the exc. population (better choose an odd number, for there exists a "central" neuron)
 int Nl_inh; // number of neurons in one line (row or column) of the inh. population (better choose an odd number, for there exists a "central" neuron)
 double tau_syn; // s, the synaptic time constant
-double t_syn_delay; // s, the synaptic transmission delay for PSPs - has to be at least one time step!
+double t_syn_delay; // s, the synaptic transmission delay for PSPs - has to be at least one timestep!
 double p_c; // connection probability (prob. that a directed connection exists)
 double w_ee; // nC, magnitude of excitatory PSP effecting an excitatory postsynaptic neuron
 double w_ei; // nC, magnitude of excitatory PSP effecting an inhibitory postsynaptic neuron
@@ -70,7 +70,7 @@ double w_ie; // nC, magnitude of inhibitory PSP effecting an excitatory postsyna
 double w_ii; // nC, magnitude of inhibitory PSP effecting an inhibitory postsynaptic neuron
 
 /*** Plasticity parameters ***/
-double t_Ca_delay; // s, delay for spikes to affect calcium dynamics - has to be at least one time step!
+double t_Ca_delay; // s, delay for spikes to affect calcium dynamics - has to be at least one timestep!
 double Ca_pre; // s^-1, increase in calcium current evoked by presynaptic spike
 double Ca_post; // s^-1, increase in calcium current evoked by postsynaptic spike
 double tau_Ca; // s, time constant for calcium dynamics
@@ -134,19 +134,19 @@ int tb_max_sum_diff_d; // time bin at which max_sum_diff_d was encountered
  * aware that it starts with zero, unlike i and j *
  * - int i: the row where the neuron is located *
  * - int j: the column where the neuron is located */
-#define cNN(i, j) (((i)-1)*Nl + ((j)-1))
+#define cNN(i, j) (((i)-1)*Nl_exc + ((j)-1))
 
 /*** row (macro) ***
  * Returns the row number for excitatory neuron n, be *
  * aware that it starts with one, unlike the consecutive number *
  * - int n: the consecutive neuron number */
-#define row(n) (rowG(n, Nl))
+#define row(n) (rowG(n, Nl_exc))
 
 /*** col (macro) ***
  * Returns the column number for excitatory neuron n, be *
  * aware that it starts with one, unlike the consecutive number *
  * - int n: the consecutive neuron number */
-#define col(n) (colG(n, Nl))
+#define col(n) (colG(n, Nl_exc))
 
 /*** symm (macro) ***
  * Returns the number of the symmetric element for an element given *
@@ -172,7 +172,7 @@ bool shallBeConnected(int m, int n)
 	}
 #else
 	// exc.->exc. synapse
-	if (m < pow2(Nl) && n < pow2(Nl))
+	if (m < pow2(Nl_exc) && n < pow2(Nl_exc))
 	{
 		if (u_dist(rg) <= p_c) // draw random number
 		{
@@ -183,7 +183,7 @@
 	}
 	// exc.->inh. synapse
-	else if (m < pow2(Nl) && n >= pow2(Nl))
+	else if (m < pow2(Nl_exc) && n >= pow2(Nl_exc))
 	{
 		if (u_dist(rg) <= p_c) // draw random number
 		{
@@ -195,7 +195,7 @@
 	}
 	// inh.->exc. synapse
-	else if (m >= pow2(Nl) && n < pow2(Nl))
+	else if (m >= pow2(Nl_exc) && n < pow2(Nl_exc))
 	{
 		if (u_dist(rg) <= p_c) // draw random number
 		{
@@ -206,7 +206,7 @@
 	// inh.->inh. synapse
-	else if (m >= pow2(Nl) && n >= pow2(Nl))
+	else if (m >= pow2(Nl_exc) && n >= pow2(Nl_exc))
 	{
 		if (u_dist(rg) <= p_c) // draw random number
 		{
@@ -238,7 +238,7 @@ void saveNetworkParams(ofstream *f) const
 {
 	*f << endl;
 	*f << "Network parameters:" << endl;
-	*f << "N_exc = " << pow2(Nl) << " (" << Nl << " x " << Nl << ")" << endl;
+	*f << "N_exc = " << pow2(Nl_exc) << " (" << Nl_exc << " x " << Nl_exc << ")" << endl;
 	*f << "N_inh = " << pow2(Nl_inh) << " (" << Nl_inh << " x " << Nl_inh << ")" << endl;
 	*f << "tau_syn = "
 #if SYNAPSE_MODEL == DELTA
@@ -275,15 +275,17 @@ void saveNetworkParams(ofstream *f) const
 	*f << "gamma_d = " << gamma_d << endl;
 	*f << "theta_p = " << theta_p << endl;
 	*f << "theta_d = " << theta_d << endl;
-	*f << "sigma_plasticity = " << sigma_plasticity << " nA s" << endl;
+	*f << "sigma_plasticity = " << dtos(sigma_plasticity/h_0,2) << " h_0" << endl;
 	*f << "alpha_p = " << alpha_p << endl;
 	*f << "alpha_c = " << alpha_c << endl;
 	*f << "alpha_d = " << alpha_d << endl;
-	*f << "theta_pro_p = " << theta_pro_p << " nA s" << endl;
-	*f << "theta_pro_c = " << theta_pro_c << " nA s" << endl;
-	*f << "theta_pro_d = " << theta_pro_d << " nA s" << endl;
-	*f << "theta_tag_p = " << theta_tag_p << " nA s" << endl;
-	*f << "theta_tag_d = " << theta_tag_d << " nA s" << endl;
+
+	double nm = 1. / (theta_pro_c/h_0) - 0.001; // compute neuromodulator concentration from threshold theta_pro_c
+	*f << "theta_pro_p = " << dtos(theta_pro_p/h_0,2) << " h_0" << endl;
+	*f << "theta_pro_c = " << dtos(theta_pro_c/h_0,2) << " h_0 (nm = " << dtos(nm,2) << ")" << endl;
+	*f << "theta_pro_d = " << dtos(theta_pro_d/h_0,2) << " h_0" << endl;
+	*f << "theta_tag_p = " << dtos(theta_tag_p/h_0,2) << " h_0" << endl;
+	*f << "theta_tag_d = " << dtos(theta_tag_d/h_0,2) << " h_0" << endl;
 
 	neurons[0].saveNeuronParams(f); // all neurons have the same parameters, take the first one
 }
@@ -291,7 +293,7 @@
 /*** saveNetworkState ***
  * Saves the current state of the whole network to a given file using boost function serialize(...) *
  * - file: the file to read the data from *
- * - tb: current time step */
+ * - tb: current timestep */
 void saveNetworkState(string file, int tb)
 {
 	ofstream savefile(file);
@@ -357,7 +359,7 @@ template void serialize(Archive &ar, const unsigned int version)
 /*** processTimeStep ***
  * Processes one timestep (of duration dt) for the network [rich mode / compmode == 1] *
- * - int tb: current time step (for evaluating stimulus and for computing spike contributions) *
+ * - int tb: current timestep (for evaluating stimulus and for computing spike contributions) *
  * - ofstream* txt_spike_raster [optional]: file containing spike times for spike raster plot *
  * - return: number of spikes that occurred within the considered timestep in the whole network */
 int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
@@ -374,7 +376,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 	{
 		neurons[m].processTimeStep(tb, -1); // computation of individual neuron dynamics
 
-		// add spikes to raster plot and count spikes in this time step
+		// add spikes to raster plot and count spikes in this timestep
 		if (neurons[m].getActivity())
 		{
 #if SPIKE_PLOTTING == RASTER || SPIKE_PLOTTING == NUMBER_AND_RASTER
@@ -478,7 +480,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 #if PLASTICITY == CALCIUM || PLASTICITY == CALCIUM_AND_STC
 			bool delayed_Ca = false; // specifies if a presynaptic spike occurred t_Ca_delay ago
 
-			if (m < pow2(Nl)) // plasticity only for exc. -> exc. connections
+			if (m < pow2(Nl_exc)) // plasticity only for exc. -> exc. connections
 			{
 				// go through presynaptic spikes for calcium contribution; start from last one that was used plus one
 				for (int k=last_Ca_spike_index[m]; k<=neurons[m].getSpikeHistorySize(); k++)
@@ -541,7 +543,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 			}
 
 			// Long-term plasticity
-			if (m < pow2(Nl) && n < pow2(Nl)) // plasticity only for exc. -> exc. connections
+			if (m < pow2(Nl_exc) && n < pow2(Nl_exc)) // plasticity only for exc. -> exc. connections
 			{
 #if PLASTICITY == CALCIUM || PLASTICITY == CALCIUM_AND_STC
 				// Calcium dynamics
@@ -550,11 +552,15 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 				if (delayed_Ca) // if presynaptic spike occurred t_Ca_delay ago
 					Ca[m][n] += Ca_pre;
 
-				if (neurons[n].getActivity()) // if postsynaptic spike occurred in previous time step
+				if (neurons[n].getActivity()) // if postsynaptic spike occurred in previous timestep
 					Ca[m][n] += Ca_post;
 
 				// E-LTP/-LTD
-				if (Ca[m][n] >= theta_p) // if there is E-LTP
+				if ((Ca[m][n] >= theta_p) // if there is E-LTP and "STDP-like" condition is fulfilled
+				#if LTP_FR_THRESHOLD > 0
+				    && (neurons[m].spikesInInterval(tb-2500,tb+1) > LTP_FR_THRESHOLD/2 && neurons[n].spikesInInterval(tb-2500,tb+1) > LTP_FR_THRESHOLD/2)
+				#endif
+				   )
 				{
 					double noise = sigma_plasticity * sqrt(tau_h) * sqrt(2) * norm_dist(rg) / sqrt(dt); // division by sqrt(dt) was not in Li et al., 2016
 					double C = 0.1 + gamma_p + gamma_d;
@@ -568,7 +574,11 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 						tb_max_dev = tb;
 					}
 				}
-				else if (Ca[m][n] >= theta_d) // if there is E-LTD
+				else if ((Ca[m][n] >= theta_d) // if there is E-LTD
+				#if LTD_FR_THRESHOLD > 0
+				    && (neurons[m].spikesInInterval(tb-2500,tb+1) > LTD_FR_THRESHOLD/2 && neurons[n].spikesInInterval(tb-2500,tb+1) > LTD_FR_THRESHOLD/2)
+				#endif
+				   )
 				{
 					double noise = sigma_plasticity * sqrt(tau_h) * norm_dist(rg) / sqrt(dt); // division by sqrt(dt) was not in Li et al., 2016
 					double C = 0.1 + gamma_d;
@@ -607,11 +617,11 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 				if (h_dev >= theta_tag_p) // LTP
 				{
 #if PROTEIN_POOLS == POOLS_PCD
-					double pa = neurons[n].getPProteinAmount()*neurons[n].getCProteinAmount(); // LTP protein amount times common protein amount from previous time step
+					double pa = neurons[n].getPProteinAmount()*neurons[n].getCProteinAmount(); // LTP protein amount times common protein amount from previous timestep
#elif PROTEIN_POOLS == POOLS_PD
-					double pa = neurons[n].getPProteinAmount(); // LTP protein amountfrom previous time step
+					double pa = neurons[n].getPProteinAmount(); // LTP protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_C
-					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous time step
+					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous timestep
 #endif
 #ifdef TWO_NEURONS_ONE_SYNAPSE
 					tag_glob = true;
@@ -627,11 +637,11 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 				else if (-h_dev >= theta_tag_d) // LTD
 				{
 #if PROTEIN_POOLS == POOLS_PCD
-					double pa = neurons[n].getDProteinAmount()*neurons[n].getCProteinAmount(); // LTD protein amount times common protein amount from previous time step
+					double pa = neurons[n].getDProteinAmount()*neurons[n].getCProteinAmount(); // LTD protein amount times common protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_PD
-					double pa = neurons[n].getDProteinAmount(); // LTD protein amountfrom previous time step
+					double pa = neurons[n].getDProteinAmount(); // LTD protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_C
-					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous time step
+					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous timestep
 #endif
 #ifdef TWO_NEURONS_ONE_SYNAPSE
 					tag_glob = true;
@@ -658,7 +668,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 			double tau_mt_stdp = tau_syn_stdp * tau_m_stdp / (tau_syn_stdp + tau_m_stdp);
 			double eta = 12e-2;
 
-			// if presynaptic neuron m spiked in previous time step
+			// if presynaptic neuron m spiked in previous timestep
 			if (delayed_PSP)
 			{
 				int last_post_spike = neurons[n].getSpikeHistorySize();
@@ -676,7 +686,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 			}
 
-			// if postsynaptic neuron n spiked in previous time step
+			// if postsynaptic neuron n spiked in previous timestep
 			bool delayed_PSP2 = false;
 			for (int k=neurons[n].getSpikeHistorySize(); k>0; k--)
 			{
@@ -724,7 +734,7 @@ int processTimeStep(int tb, ofstream* txt_spike_raster = NULL)
 /*** processTimeStep_FF ***
  * Processes one timestep for the network only computing late-phase observables [fast-forward mode / compmode == 2] *
- * - int tb: current time step (for printing purposes only) *
+ * - int tb: current timestep (for printing purposes only) *
  * - double delta_t: duration of the fast-forward timestep *
  * - ofstream* logf: pointer to log file handle (for printing interesting information) *
  * - return: true if late-phase dynamics are persisting, false if not */
@@ -854,7 +864,7 @@ int processTimeStep_FF(int tb, double delta_t, ofstream* logf)
 			double h_dev; // the deviation of the early-phase weight from its resting state
 
 			// Long-term plasticity
-			if (m < pow2(Nl) && n < pow2(Nl)) // plasticity only for exc. -> exc. connections
+			if (m < pow2(Nl_exc) && n < pow2(Nl_exc)) // plasticity only for exc. -> exc. connections
 			{
 #if PLASTICITY == CALCIUM || PLASTICITY == CALCIUM_AND_STC
@@ -881,11 +891,11 @@ int processTimeStep_FF(int tb, double delta_t, ofstream* logf)
 				if (h_dev >= theta_tag_p) // LTP
 				{
 #if PROTEIN_POOLS == POOLS_PCD
-					double pa = neurons[n].getPProteinAmount()*neurons[n].getCProteinAmount(); // LTP protein amount times common protein amount from previous time step
+					double pa = neurons[n].getPProteinAmount()*neurons[n].getCProteinAmount(); // LTP protein amount times common protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_PD
-					double pa = neurons[n].getPProteinAmount(); // LTP protein amountfrom previous time step
+					double pa = neurons[n].getPProteinAmount(); // LTP protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_C
-					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous time step
+					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous timestep
 #endif
 #ifdef TWO_NEURONS_ONE_SYNAPSE
 					tag_glob = true;
@@ -901,11 +911,11 @@ int processTimeStep_FF(int tb, double delta_t, ofstream* logf)
 				else if (-h_dev >= theta_tag_d) // LTD
 				{
 #if PROTEIN_POOLS == POOLS_PCD
-					double pa = neurons[n].getDProteinAmount()*neurons[n].getCProteinAmount(); // LTD protein amount times common protein amount from previous time step
+					double pa = neurons[n].getDProteinAmount()*neurons[n].getCProteinAmount(); // LTD protein amount times common protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_PD
-					double pa = neurons[n].getDProteinAmount(); // LTD protein amountfrom previous time step
+					double pa = neurons[n].getDProteinAmount(); // LTD protein amount from previous timestep
 #elif PROTEIN_POOLS == POOLS_C
-					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous time step
+					double pa = neurons[n].getCProteinAmount(); // common protein amount from previous timestep
 #endif
 #ifdef TWO_NEURONS_ONE_SYNAPSE
 					tag_glob = true;
@@ -1067,7 +1077,7 @@ void setRhombStimulus(Stimulus& _st, int center, int radius)
 
 		for (int j=-num_cols; j<=num_cols; j++)
 		{
-			neurons[center+i*Nl+j].setCurrentStimulus(_st); // set temporal course of current stimulus for given neuron
+			neurons[center+i*Nl_exc+j].setCurrentStimulus(_st); // set temporal course of current stimulus for given neuron
 		}
 	}
@@ -1096,7 +1106,7 @@ void setRhombPartialRandomStimulus(Stimulus& _st, int center, int radius, double
 
 		for (int j=-num_cols; j<=num_cols; j++)
 		{
-			indices[ind++] = center+i*Nl+j;
+			indices[ind++] = center+i*Nl_exc+j;
 		}
 	}
@@ -1134,7 +1144,7 @@ void setRhombPartialStimulus(Stimulus& _st, int center, int radius, double fract
 
 		for (int j=-num_cols; j<=num_cols; j++)
 		{
-			neurons[center+i*Nl+j].setCurrentStimulus(_st); // set temporal course of current stimulus for given neuron
+			neurons[center+i*Nl_exc+j].setCurrentStimulus(_st); // set temporal course of current stimulus for given neuron
 			count--;
 			if (count == 0)
 				break;
@@ -1155,7 +1165,7 @@ void setRhombPartialStimulus(Stimulus& _st, int center, int radius, double fract
  * - int range_end [optional]: one plus the highest neuron number that can be drawn (-1: highest possible) */
 void setRandomStimulus(Stimulus& _st, int num, ofstream* f = NULL, int range_start=0, int range_end=-1)
 {
-	int range_len = (range_end == -1) ? (pow2(Nl) - range_start) : (range_end - range_start); // the number of neurons eligible for being drawn
+	int range_len = (range_end == -1) ? (pow2(Nl_exc) - range_start) : (range_end - range_start); // the number of neurons eligible for being drawn
 	bool* stim_neurons = new bool [range_len];
 	uniform_int_distribution u_dist_neurons(0, range_len-1); // uniform distribution to draw neuron numbers
 	int neurons_left = num;
@@ -1239,7 +1249,7 @@ void stipulateRhombAssembly(int center, int radius)
 
 		for (int j=-num_cols; j<=num_cols; j++)
 		{
-			int m = center+i*Nl+j;
+			int m = center+i*Nl_exc+j;
 
 			for (int k=-radius; k<=radius; k++)
 			{
@@ -1247,7 +1257,7 @@ void stipulateRhombAssembly(int center, int radius)
 
 				for (int l=-num_cols; l<=num_cols; l++)
 				{
-					int n = center+k*Nl+l;
+					int n = center+k*Nl_exc+l;
 
 					if (conn[m][n]) // set all connections within the assembly to this value
 						h[m][n] = value;
@@ -1349,7 +1359,7 @@ double getLateSynapticStrength(synapse s) const
 /*** getMeanEarlySynapticStrength ***
  * Returns the mean early-phase synaptic strength (averaged over all synapses within the given set of neurons) *
- * - int n: the number of neurons that shall be considered (e.g., n=Nl^2 for all excitatory neurons, or n=N for all neurons) *
+ * - int n: the number of neurons that shall be considered (e.g., n=Nl_exc^2 for all excitatory neurons, or n=N for all neurons) *
  * - int off [optional]: the offset that defines at which neuron number the considered range begins *
  * - return: the mean early-phase synaptic strength */
 double getMeanEarlySynapticStrength(int n, int off=0) const
@@ -1377,7 +1387,7 @@ double getMeanEarlySynapticStrength(int n, int off=0) const
 /*** getMeanLateSynapticStrength ***
  * Returns the mean late-phase synaptic strength (averaged over all synapses within the given set of neurons) *
- * - int n: the number of neurons that shall be considered (e.g., n=Nl^2 for all excitatory neurons, or n=N for all neurons) *
+ * - int n: the number of neurons that shall be considered (e.g., n=Nl_exc^2 for all excitatory neurons, or n=N for all neurons) *
  * - int off [optional]: the offset that defines at which neuron number the considered range begins *
  * - return: the mean late-phase synaptic strength */
 double getMeanLateSynapticStrength(int n, int off=0) const
@@ -1406,7 +1416,7 @@ double getMeanLateSynapticStrength(int n, int off=0) const
 /*** getSDEarlySynapticStrength ***
  * Returns the standard deviation of the early-phase synaptic strength (over all synapses within the given set of neurons) *
  * - double mean: the mean of the early-phase syn. strength within the given set
- * - int n: the number of neurons that shall be considered (e.g., n=Nl^2 for all excitatory neurons, or n=N for all neurons) *
+ * - int n: the number of neurons that shall be considered (e.g., n=Nl_exc^2 for all excitatory neurons, or n=N for all neurons) *
  * - int off [optional]: the offset that defines at which neuron number the considered range begins *
  * - return: the std. dev. of the early-phase synaptic strength */
 double getSDEarlySynapticStrength(double mean, int n, int off=0) const
@@ -1435,7 +1445,7 @@ double getSDEarlySynapticStrength(double mean, int n, int off=0) const
 /*** getSDLateSynapticStrength ***
  * Returns the standard deviation of the late-phase synaptic strength (over all synapses within the given set of neurons) *
  * - double mean: the mean of the late-phase syn. strength within the given set
- * - int n: the number of neurons that shall be considered (e.g., n=Nl^2 for all excitatory neurons, or n=N for all neurons) *
+ * - int n: the number of neurons that shall be considered (e.g., n=Nl_exc^2 for all excitatory neurons, or n=N for all neurons) *
  * - int off [optional]: the offset that defines at which neuron number the considered range begins *
 * - return: the std. dev. of the late-phase synaptic strength */
 double getSDLateSynapticStrength(double mean, int n, int off=0) const
@@ -1463,7 +1473,7 @@ double getSDLateSynapticStrength(double mean, int n, int off=0) const
 /*** getMeanCProteinAmount ***
  * Returns the mean protein amount (averaged over all neurons within the given set) *
- * - int n: the number of neurons that shall be considered (e.g., n=Nl^2 for all excitatory neurons, or n=N for all neurons) *
+ * - int n: the number of neurons that shall be considered (e.g., n=Nl_exc^2 for all excitatory neurons, or n=N for all neurons) *
  * - int off [optional]: the offset that defines at which neuron number the considered range begins *
  * - return: the mean protein amount */
 double getMeanCProteinAmount(int n, int off=0) const
@@ -1547,25 +1557,25 @@ int readConnections(string file, int format = 0)
 			}
 			else if (buf[i] == '1')
 			{
-				if (m < pow2(Nl) && n < pow2(Nl)) // exc. -> exc.
+				if (m < pow2(Nl_exc) && n < pow2(Nl_exc)) // exc. -> exc.
 				{
 					neurons[m].addOutgoingConnection(n, TYPE_EXC);
 					//cout << m << " -> " << n << " added" << endl;
 					neurons[n].incNumberIncoming(TYPE_EXC);
 				}
-				else if (m < pow2(Nl) && n >= pow2(Nl)) // exc. -> inh.
+				else if (m < pow2(Nl_exc) && n >= pow2(Nl_exc)) // exc. -> inh.
 				{
 					neurons[m].addOutgoingConnection(n, TYPE_INH);
 					//cout << m << " -> " << n << " added" << endl;
 					neurons[n].incNumberIncoming(TYPE_EXC);
 				}
-				else if (m >= pow2(Nl) && n < pow2(Nl)) // inh. -> exc.
+				else if (m >= pow2(Nl_exc) && n < pow2(Nl_exc)) // inh. -> exc.
 				{
 					neurons[m].addOutgoingConnection(n, TYPE_EXC);
 					//cout << m << " -> " << n << " added" << endl;
 					neurons[n].incNumberIncoming(TYPE_INH);
 				}
-				else if (m >= pow2(Nl) && n >= pow2(Nl)) // inh. -> inh.
+				else if (m >= pow2(Nl_exc) && n >= pow2(Nl_exc)) // inh. -> inh.
 				{
 					neurons[m].addOutgoingConnection(n, TYPE_INH);
 					//cout << m << " -> " << n << " added" << endl;
@@ -1699,13 +1709,13 @@ int printAllInitialWeights(string file, int format = 0)
 			// Output of all initial weights
 			if (conn[m][n])
 			{
-				if (m < pow2(Nl) && n < pow2(Nl)) // exc. -> exc.
+				if (m < pow2(Nl_exc) && n < pow2(Nl_exc)) // exc. -> exc.
 					f << h[m][n] << " ";
-				else if (m < pow2(Nl) && n >= pow2(Nl)) // exc. -> inh.
+				else if (m < pow2(Nl_exc) && n >= pow2(Nl_exc)) // exc. -> inh.
 					f << w_ei << " ";
-				else if (m >= pow2(Nl) && n < pow2(Nl)) // inh. -> exc.
+				else if (m >= pow2(Nl_exc) && n < pow2(Nl_exc)) // inh. -> exc.
 					f << w_ie << " ";
-				else if (m >= pow2(Nl) && n >= pow2(Nl)) // inh. -> inh.
+				else if (m >= pow2(Nl_exc) && n >= pow2(Nl_exc)) // inh. -> inh.
 					f << w_ii << " ";
 			}
@@ -1763,7 +1773,7 @@ int readCouplingStrengths(string file)
 		{
 			if (phase == 1) // now begins the second phase
 			{
-				if (m != pow2(Nl) || n != pow2(Nl)) // if dimensions do not match
+				if (m != pow2(Nl_exc) || n != pow2(Nl_exc)) // if dimensions do not match
 				{
 					f.close();
 					return 0;
@@ -1780,7 +1790,7 @@ int readCouplingStrengths(string file)
 	f.close();
 
-	if (m != pow2(Nl) || n != pow2(Nl)) // if dimensions do not match
+	if (m != pow2(Nl_exc) || n != pow2(Nl_exc)) // if dimensions do not match
 	{
 		return 0;
 	}
@@ -1790,7 +1800,7 @@ int readCouplingStrengths(string file)
 /*** setStimulationEnd ***
  * Tells the Network instance the end of stimulation (even if not all stimuli are yet set) *
- * - int stim_end: the time step in which stimulation ends */
+ * - int stim_end: the timestep in which stimulation ends */
 void setStimulationEnd(int stim_end)
 {
 	if (stim_end > stimulation_end)
@@ -1799,7 +1809,7 @@ void setStimulationEnd(int stim_end)
 
 /*** setSpikeStorageTime ***
- * Sets the number of timesteps for which spikes have to be kept *
+ * Sets the number of timesteps for which spikes have to be kept in RAM *
  * - int storage_steps: the size of the storage timespan in timesteps */
 void setSpikeStorageTime(int storage_steps)
 {
@@ -1922,18 +1932,18 @@ void setPSThresholds(double _theta_pro_P, double _theta_pro_C, double _theta_pro
  * Sets all parameters, creates neurons and synapses *
  * --> it is required to call setSynTimeConstant and setCouplingStrengths immediately *
  * after calling this constructor! *
- * - double _dt: the length of one time step in s *
- * - int _Nl: the number of neurons in one line in excitatory population (row/column) *
+ * - double _dt: the length of one timestep in s *
+ * - int _Nl_exc: the number of neurons in one line in excitatory population (row/column) *
  * - int _Nl_inh: the number of neurons in one line in inhibitory population (row/column) - line structure so that stimulation of inhib. *
 * population could be implemented more easily *
 * - double _p_c: connection probability *
 * - double _sigma_plasticity: standard deviation of the plasticity *
 * - double _z_max: the upper z bound */
-Network(const double _dt, const int _Nl, const int _Nl_inh, double _p_c, double _sigma_plasticity, double _z_max) :
-	dt(_dt), rg(getClockSeed()), u_dist(0.0,1.0), norm_dist(0.0,1.0), Nl(_Nl), Nl_inh(_Nl_inh), z_max(_z_max)
+Network(const double _dt, const int _Nl_exc, const int _Nl_inh, double _p_c, double _sigma_plasticity, double _z_max) :
+	dt(_dt), rg(getClockSeed()), u_dist(0.0,1.0), norm_dist(0.0,1.0), Nl_exc(_Nl_exc), Nl_inh(_Nl_inh), z_max(_z_max)
 {
-	N = pow2(Nl) + pow2(Nl_inh); // total number of neurons
+	N = pow2(Nl_exc) + pow2(Nl_inh); // total number of neurons
 
 	p_c = _p_c; // set connection probability
@@ -1986,7 +1996,7 @@ Network(const double _dt, const int _Nl, const int _Nl_inh, double _p_c, double
 	for (int m=0; mI coupling strength in units of h_0
 double w_ie; // I->E coupling strength in units of h_0
 double w_ii; // I->I coupling strength in units of h_0
 const double t_wfr; // s, size of the time window for computing instantaneous firing rates
-const int wfr; // size of the time window for computing instantaneous firing rates in time steps
+const int wfr; // timesteps, size of the time window for computing instantaneous firing rates
 string prot_learn; // the stimulation protocol for training
 string prot_recall; // the stimulation protocol for recall
 double recall_fraction; // recall stimulus is applied to this fraction of the original assembly
@@ -68,14 +72,17 @@ int N_stim; // the number of hypothetical synapses per neuron that are used for
 bool ff_enabled; // specifies if fast-forward mode can be used
 Network net; // the network
 double z_max; // maximum late-phase coupling strength
+#if OSCILL_INP != OFF
+double oscill_inp_mean; // nA, mean of sinusoidal oscillatory input to excitatory neurons
+double oscill_inp_amp; // nA, amplitude of sinusoidal oscillatory input to excitatory neurons
+#endif
 
 /*** Output parameters ***/
 vector exc_neuron_output {};
 vector inh_neuron_output {};
 vector synapse_output {};
 int output_period; // number of timesteps to pass for the next data output (if set to 1, most detailed output is obtained)
-int net_output_period; // number of timesteps to pass for the next network plot
-vector net_output {}; // vector of times for selected network plots
+vector net_output {}; // vector of times selected for the output of network plots (mind the larger timesteps in FF mode!)
 #ifdef SEEK_I_0
 double *seekic; // pointer to a variable to communicate with the main(...)
function while seeking I_0 @@ -112,10 +119,52 @@ void saveParams(string str) f << "learning stimulus = STIP" << endl; #endif f << "recall stimulus = " << prot_recall << endl; +#if CORE_SHAPE == FIRST f << "core = first " << CORE_SIZE << " neurons" << endl; +#elif CORE_SHAPE == SECOND + f << "core = second " << CORE_SIZE << " neurons" << endl; +#elif CORE_SHAPE == OVERLAP10_2ND + f << "core = " << CORE_SIZE << " neurons, OVERLAP10_2ND" << endl; +#elif CORE_SHAPE == OVERLAP15_2ND + f << "core = " << CORE_SIZE << " neurons, OVERLAP15_2ND" << endl; +#elif CORE_SHAPE == OVERLAP20_2ND + f << "core = " << CORE_SIZE << " neurons, OVERLAP20_2ND" << endl; +#elif CORE_SHAPE == THIRD + f << "core = third " << CORE_SIZE << " neurons" << endl; +#elif CORE_SHAPE == OVERLAP10_3RD + f << "core = " << CORE_SIZE << " neurons, OVERLAP10_3RD" << endl; +#elif CORE_SHAPE == OVERLAP10_3RD_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP10_3RD_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP10_3RD_NO_AC_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP10_3RD_NO_AC_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP10_3RD_NO_BC_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP10_3RD_NO_BC_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP15_3RD + f << "core = " << CORE_SIZE << " neurons, OVERLAP15_3RD" << endl; +#elif CORE_SHAPE == OVERLAP15_3RD_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP15_3RD_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP15_3RD_NO_AC_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP15_3RD_NO_AC_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP15_3RD_NO_BC_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP15_3RD_NO_BC_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP20_3RD + f << "core = " << CORE_SIZE << " neurons, OVERLAP20_3RD" << endl; +#elif CORE_SHAPE == OVERLAP20_3RD_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP20_3RD_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP20_3RD_NO_AC_NO_ABC + f << "core = " << CORE_SIZE << " 
neurons, OVERLAP20_3RD_NO_AC_NO_ABC" << endl; +#elif CORE_SHAPE == OVERLAP20_3RD_NO_BC_NO_ABC + f << "core = " << CORE_SIZE << " neurons, OVERLAP20_3RD_NO_BC_NO_ABC" << endl; +#elif CORE_SHAPE == RAND + f << "core = random " << CORE_SIZE << " neurons" << endl; +#endif f << "recall fraction = " << recall_fraction << endl; f << "N_stim = " << N_stim << endl; - + f << "osc. input = "; +#if OSCILL_INP != OFF + f << "(" << oscill_inp_mean << " +- " << oscill_inp_amp << ") nA at " << double(1./(OSCILL_INP*dt)) << " Hz"; +#endif + f << endl; net.saveNetworkParams(&f); f << endl; @@ -135,7 +184,7 @@ void addToParamsFile(string str) } /*** instFiringRates *** - * Computes the instantaneous firing rates of all the network neurons and prints them to a given data file * + * Computes the instantaneous firing rates of all the neurons in the network and prints them to a given data file * * - txt_net_tprime: pointer to the data file * * - jprime: timestep for which the firing rates shall be calculated */ void instFiringRates(ofstream* txt_net_tprime, int jprime) @@ -148,10 +197,10 @@ void instFiringRates(ofstream* txt_net_tprime, int jprime) else t_wfr_eff = t_wfr; - for (int m=0; m < pow2(Nl); m++) + for (int m=0; m < pow2(Nl_exc); m++) { int num_spikes = 0; // number of spikes in time window t_wfr_eff - bool removed = false; // specified if old, now irrelevant spikes have been removed + bool removed = false; // specified if old, now irrelevant spikes have been removed from RAM int sp = 1; int hist_size = net.getSpikeHistorySize(m); @@ -159,11 +208,12 @@ void instFiringRates(ofstream* txt_net_tprime, int jprime) { int spt = net.getSpikeTime(sp, m); - if (spt >= jprime-wfr/2.) + if (spt >= jprime-wfr/2.) // spikes after jprime-wfr/2. { - if (!removed) // is entered for first spike after jprime-wfr/2. 
- // for removal, (net_output_period + wfr/2) has to be larger than t_syn_delay_steps and t_Ca_delay_steps (which is usually the case) + if (!removed) { + // for removal not to alter the network dynamics, (jprime(t2) - jprime(t1) + wfr/2.) has to be larger + // than t_syn_delay_steps and t_Ca_delay_steps, which is the case for the default values net.removeSpikes(1, sp-1, m); sp = 0; hist_size = net.getSpikeHistorySize(m); @@ -182,7 +232,7 @@ double rate = num_spikes / t_wfr_eff; *txt_net_tprime << fixed << rate; - if ((m+1) % Nl != 0) // still in the same neuron row + if ((m+1) % Nl_exc != 0) // still in the same neuron row *txt_net_tprime << "\t\t"; else *txt_net_tprime << endl; // next neuron row begins @@ -249,6 +299,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) const double h_0 = net.getInitialWeight(); w_stim = h_0; // initial synaptic weight; set coupling strength for stimulation const string separator = getSeparator(); cout << separator << endl; // string of characters for a separator in command line + net.setSpikeStorageTime(n + int(round(wfr/2.))); // reserve enough RAM for fast spike storage // Neuronal and synaptic output #ifdef TWO_NEURONS_ONE_SYNAPSE @@ -259,13 +310,13 @@ int simulate(string working_dir, bool first_sim, string _purpose) exc_neuron_output = vector {}; inh_neuron_output = vector {}; #else - if (Nl == 20)
40) + else if (Nl_exc == 40) { //exc_neuron_output = vector {816,817,818}; //exc_neuron_output = vector {cNN(20,21),cNN(23,21),cNN(30,21)}; // three neurons: one "as", one "ans" and one "e" @@ -285,7 +336,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) // synapse(660,940), synapse(660,941), synapse(782,1), synapse(782,53)}; synapse_output = vector {synapse(6,68)}; } - else if (Nl == 50) + else if (Nl_exc == 50) { exc_neuron_output = vector {1,640,1300}; inh_neuron_output = vector {2500}; @@ -352,7 +403,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) #endif // Output with general information // if this is the first simulation run by the main(...) function (first_sim = true), use time stamp from NetworkMain.cpp, else, set a new time stamp - cout << "\x1b[33mNetwork simulation with N_exc = " << pow2(Nl) << ", N_inh = " << pow2(Nl_inh) + cout << "\x1b[33mNetwork simulation with N_exc = " << pow2(Nl_exc) << ", N_inh = " << pow2(Nl_inh) << ", t_max = " << t_max << " s (" << dateStr("", !first_sim) << ")\x1b[0m" << endl; cout << "Learning protocol: " << prot_learn << endl; cout << "Recall protocol: " << prot_recall << endl; @@ -399,9 +450,9 @@ int simulate(string working_dir, bool first_sim, string _purpose) writePalViridis(); // create palette file for gnuplot color plots // learning stimulation - Stimulus st_learn = createStimulusFromProtocols(prot_learn, "", dt, w_stim, N_stim, tau_syn); // create Stimulus object containing learning stimulation only + Stimulus st_learn = createStimulusFromProtocols(prot_learn, "", dt, w_stim, N_stim, tau_syn, &logf); // create Stimulus object containing learning stimulation only // recall stimulation - Stimulus st_recall = createStimulusFromProtocols("", prot_recall, dt, w_stim, N_stim, tau_syn); // create Stimulus object containing recall stimulation only + Stimulus st_recall = createStimulusFromProtocols("", prot_recall, dt, w_stim, N_stim, tau_syn, &logf); // create Stimulus object 
containing recall stimulation only #if STIPULATE_CA == ON Stimulus st_full = st_recall; // only stipulated cell assembly: recall stimulation is all (effective) stimulation there is @@ -409,7 +460,12 @@ int simulate(string working_dir, bool first_sim, string _purpose) Stimulus st_full = st_learn; // no network, no recall: just a dummy for st_full has to be defined #else // learning + recall stimulation - Stimulus st_full = createStimulusFromProtocols(prot_learn, prot_recall, dt, w_stim, N_stim, tau_syn); // create Stimulus object containing learning and recall stimulation + Stimulus st_full = createStimulusFromProtocols(prot_learn, prot_recall, dt, w_stim, N_stim, tau_syn, &logf); // create Stimulus object containing learning and recall stimulation +#endif +#if OSCILL_INP != OFF + Stimulus st_oscill = createOscillStimulus(dt, n, OSCILL_INP, oscill_inp_mean, oscill_inp_amp); // oscillatory input for excitatory population + net.setBlockStimulus(st_oscill, pow2(Nl_exc)); // to excitatory population + //net.setBlockStimulus(st_oscill, pow2(Nl_inh), pow2(Nl_exc)); // to inhibitory population #endif int tb_stim_start = st_learn.getStimulationStart(); // time bin in which all stimulation begins int tb_stim_end = st_full.getStimulationEnd(); // time bin in which all stimulation ends @@ -439,21 +495,18 @@ int simulate(string working_dir, bool first_sim, string _purpose) #endif #ifdef PLASTICITY_OVER_FREQ - Stimulus st_learn2 = createStimulusFromProtocols(prot_recall, "", dt, w_stim, N_stim, tau_syn); // create Stimulus object for stimulating second neuron + Stimulus st_learn2 = createStimulusFromProtocols(prot_recall, "", dt, w_stim, N_stim, tau_syn, &logf); // create Stimulus object for stimulating second neuron #endif double p = 0.0; // percentage of process completeness - int jprime = 0; // time step in which the last network plot data were collected + int jprime = 0; // timestep in which the last network plot data were collected #if PRINT_CONNECTIONS == ON 
net.printConnections(dateStr("_connections.txt")); #endif // Network output - if (net_output_period < 0) - net_output_period = n+1; // disable periodic network output - int stim_net_output_period = n+1; //tenth_sec; // number of timesteps per which network output is generated during stimulation int ten_secs_before_recall = int(round(28800./dt)); int ten_secs_after_recall = int(round(28820./dt)); int ten_mins_after_recall = int(round(29410./dt)); @@ -471,11 +524,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) net_output[ci] = net_out_new; } } - int rich_comp_buffer; // buffer for rich computation - if (stim_net_output_period > n) - rich_comp_buffer = int(10./dt) + wfr/2.; - else - rich_comp_buffer = stim_net_output_period + wfr/2.; + int rich_comp_buffer = int(10./dt) + wfr/2.; // buffer for rich computation // ============================================================================================================================== // Check if files have been opened properly and if yes, set precision @@ -508,12 +557,12 @@ int simulate(string working_dir, bool first_sim, string _purpose) int total_c_count_exc_inh = 0; // total number of exc.->inh. connections int total_c_count_inh_inh = 0; // total number of inh.->inh. 
connections - for (int m=0; mE connectivity: " << total_c_count_exc_exc / double(pow2(Nl*Nl)-pow2(Nl)) * 100 << " % (expected: " << pc*100 << " %)" << endl; - cout << fixed << "I->E connectivity: " << total_c_count_inh_exc / double(pow2(Nl*Nl_inh)) * 100 << " % (expected: " << pc*100 << " %)" << endl; - cout << fixed << "E->I connectivity: " << total_c_count_exc_inh / double(pow2(Nl*Nl_inh)) * 100 << " % (expected: " << pc*100 << " %)" << endl; + cout << fixed << "E->E connectivity: " << total_c_count_exc_exc / double(pow2(Nl_exc*Nl_exc)-pow2(Nl_exc)) * 100 << " % (expected: " << pc*100 << " %)" << endl; + cout << fixed << "I->E connectivity: " << total_c_count_inh_exc / double(pow2(Nl_exc*Nl_inh)) * 100 << " % (expected: " << pc*100 << " %)" << endl; + cout << fixed << "E->I connectivity: " << total_c_count_exc_inh / double(pow2(Nl_exc*Nl_inh)) * 100 << " % (expected: " << pc*100 << " %)" << endl; cout << fixed << "I->I connectivity: " << total_c_count_inh_inh / double(pow2(Nl_inh*Nl_inh)-pow2(Nl_inh)) * 100 << " % (expected: " << pc*100 << " %)" << endl; // ============================================================================================================================== @@ -632,14 +681,119 @@ int simulate(string working_dir, bool first_sim, string _purpose) net.setSingleNeuronStimulus(0, st_learn2); #endif -#elif defined INTERMEDIATE_RECALL +#else + + #if defined MEMORY_CONSOLIDATION_P1 + // stimulation of the first block of CORE_SIZE neurons + #ifdef INTERMEDIATE_RECALL_P1 if (tb_start == 0) // real learning stimulus - net.setBlockStimulus(st_learn, CORE_SIZE); // learning stimulus for the first CORE_SIZE neurons + net.setBlockStimulus(st_learn, CORE_SIZE); // learning stimulus else // "fake learning stimulus" used for intermediate recall - net.setRandomStimulus(st_learn, int(round(recall_fraction*CORE_SIZE)), &logf, 0, CORE_SIZE); // intermediate recall stimulus for randomly selected recall_fraction*CORE_SIZE neurons - // "real recall stimulus" is 
assigned right before it starts, to prevent interference -#else - net.setBlockStimulus(st_learn, CORE_SIZE); // learning stimulus for the first CORE_SIZE neurons + net.setRandomStimulus(st_learn, int(round(recall_fraction*CORE_SIZE)), &logf, 0, CORE_SIZE); // intermediate recall stimulus + // real recall stimulus is assigned right before it starts, to avoid interference + #else + net.setBlockStimulus(st_learn, CORE_SIZE); // these neurons receive the learning stimulus + net.setBlockStimulus(st_full, int(round(recall_fraction*CORE_SIZE))); // these (non-random) neurons receive learning and recall stimulus + #endif + #elif CORE_SHAPE == FIRST + // stimulation of the first block of CORE_SIZE neurons + net.setBlockStimulus(st_learn, CORE_SIZE); // these neurons receive the learning stimulus + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, 0, CORE_SIZE); // these (random) neurons receive learning and recall stimulus + #elif CORE_SHAPE == SECOND + net.setBlockStimulus(st_learn, CORE_SIZE, CORE_SIZE); // stimulation of the second block of CORE_SIZE neurons + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, CORE_SIZE, 2*CORE_SIZE); + #elif CORE_SHAPE == OVERLAP10_2ND // stimulation of a block of CORE_SIZE neurons, overlapping with the first block by 10% + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(0.9*CORE_SIZE))); + //net.setBlockStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), int(round(0.9*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, int(round(0.9*CORE_SIZE)), int(round(0.9*CORE_SIZE + CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP15_2ND + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(0.85*CORE_SIZE))); // stimulation of a block of CORE_SIZE neurons, overlapping with the first block by 15% + //net.setBlockStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), int(round(0.9*CORE_SIZE))); + net.setRandomStimulus(st_full, 
int(round(recall_fraction*CORE_SIZE)), &logf, int(round(0.85*CORE_SIZE)), int(round(0.85*CORE_SIZE + CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP20_2ND + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(0.8*CORE_SIZE))); // stimulation of a block of CORE_SIZE neurons, overlapping with the first block by 20% + //net.setBlockStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), int(round(0.8*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, int(round(0.8*CORE_SIZE)), int(round(0.8*CORE_SIZE + CORE_SIZE))); + #elif CORE_SHAPE == THIRD + net.setBlockStimulus(st_learn, CORE_SIZE, 2*CORE_SIZE); // stimulation of the third block of CORE_SIZE neurons + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, 2*CORE_SIZE, 3*CORE_SIZE); + #elif CORE_SHAPE == OVERLAP10_3RD + net.setBlockStimulus(st_learn, int(round(0.9*CORE_SIZE)), int(round(1.85*CORE_SIZE))); // 90% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn, int(round(0.05*CORE_SIZE))); // 5% of the assembly for intersection with first CA only + net.setBlockStimulus(st_learn, int(round(0.05*CORE_SIZE)), int(round(0.9*CORE_SIZE))); // 5% of the assembly for intersection with both first and second CA + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.9*CORE_SIZE)), &logf, int(round(1.85*CORE_SIZE)), int(round(1.85*CORE_SIZE+0.9*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.05*CORE_SIZE)), &logf, 0, int(round(0.05*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.05*CORE_SIZE)), &logf, int(round(0.9*CORE_SIZE)), int(round(0.9*CORE_SIZE+0.05*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP10_3RD_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.9*CORE_SIZE)), int(round(1.8*CORE_SIZE))); // 90% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn, int(round(0.1*CORE_SIZE))); // 10% of the 
assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.9*CORE_SIZE)), &logf, int(round(1.8*CORE_SIZE)), int(round(1.8*CORE_SIZE+0.9*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.1*CORE_SIZE)), &logf, 0, int(round(0.1*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP10_3RD_NO_AC_NO_ABC + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(1.8*CORE_SIZE))); // whole assembly for disjoint set and intersection with second CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, int(round(1.8*CORE_SIZE)), int(round(1.8*CORE_SIZE+CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP10_3RD_NO_BC_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.9*CORE_SIZE)), int(round(1.9*CORE_SIZE))); // 90% of the assembly for disjoint set + net.setBlockStimulus(st_learn, int(round(0.1*CORE_SIZE))); // 10% of the assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.9*CORE_SIZE)), &logf, int(round(1.9*CORE_SIZE)), int(round(1.9*CORE_SIZE+0.9*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.1*CORE_SIZE)), &logf, 0, int(round(0.1*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP15_3RD + net.setBlockStimulus(st_learn, int(round(0.85*CORE_SIZE)), int(round(1.775*CORE_SIZE))); // 85% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn, int(round(0.075*CORE_SIZE))); // 7.5% of the assembly for intersection with first CA only + net.setBlockStimulus(st_learn, int(round(0.075*CORE_SIZE)), int(round(0.85*CORE_SIZE))); // 7.5% of the assembly for intersection with both first and second CA + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.85*CORE_SIZE)), &logf, int(round(1.775*CORE_SIZE)), int(round(1.775*CORE_SIZE+0.85*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.075*CORE_SIZE)), &logf, 0,
int(round(0.075*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.075*CORE_SIZE)), &logf, int(round(0.85*CORE_SIZE)), int(round(0.85*CORE_SIZE+0.075*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP15_3RD_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.85*CORE_SIZE)), int(round(1.7*CORE_SIZE))); // 85% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn, int(round(0.15*CORE_SIZE))); // 15% of the assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.85*CORE_SIZE)), &logf, int(round(1.7*CORE_SIZE)), int(round(1.7*CORE_SIZE+0.85*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.15*CORE_SIZE)), &logf, 0, int(round(0.15*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP15_3RD_NO_AC_NO_ABC + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(1.7*CORE_SIZE))); // whole assembly for disjoint set and intersection with second CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, int(round(1.7*CORE_SIZE)), int(round(1.7*CORE_SIZE+CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP15_3RD_NO_BC_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.85*CORE_SIZE)), int(round(1.85*CORE_SIZE))); // 85% of the assembly for disjoint set + net.setBlockStimulus(st_learn, int(round(0.15*CORE_SIZE))); // 15% of the assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.85*CORE_SIZE)), &logf, int(round(1.85*CORE_SIZE)), int(round(1.85*CORE_SIZE+0.85*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.15*CORE_SIZE)), &logf, 0, int(round(0.15*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP20_3RD + net.setBlockStimulus(st_learn, int(round(0.8*CORE_SIZE)), int(round(1.7*CORE_SIZE))); // 80% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn,
int(round(0.1*CORE_SIZE))); // 10% of the assembly for intersection with first CA only + net.setBlockStimulus(st_learn, int(round(0.1*CORE_SIZE)), int(round(0.8*CORE_SIZE))); // 10% of the assembly for intersection with both first and second CA + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.8*CORE_SIZE)), &logf, int(round(1.7*CORE_SIZE)), int(round(1.7*CORE_SIZE+0.8*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.1*CORE_SIZE)), &logf, 0, int(round(0.1*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.1*CORE_SIZE)), &logf, int(round(0.8*CORE_SIZE)), int(round(0.8*CORE_SIZE+0.1*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP20_3RD_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.8*CORE_SIZE)), int(round(1.6*CORE_SIZE))); // 80% of the assembly for disjoint set and intersection with second CA only + net.setBlockStimulus(st_learn, int(round(0.2*CORE_SIZE))); // 20% of the assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.8*CORE_SIZE)), &logf, int(round(1.6*CORE_SIZE)), int(round(1.6*CORE_SIZE+0.8*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.2*CORE_SIZE)), &logf, 0, int(round(0.2*CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP20_3RD_NO_AC_NO_ABC + net.setBlockStimulus(st_learn, CORE_SIZE, int(round(1.6*CORE_SIZE))); // whole assembly for disjoint set and intersection with second CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf, int(round(1.6*CORE_SIZE)), int(round(1.6*CORE_SIZE+CORE_SIZE))); + #elif CORE_SHAPE == OVERLAP20_3RD_NO_BC_NO_ABC + net.setBlockStimulus(st_learn, int(round(0.8*CORE_SIZE)), int(round(1.8*CORE_SIZE))); // 80% of the assembly for disjoint set + net.setBlockStimulus(st_learn, int(round(0.2*CORE_SIZE))); // 20% of the assembly for intersection with first CA only + + net.setRandomStimulus(st_full, int(round(recall_fraction*0.8*CORE_SIZE)), &logf, 
int(round(1.8*CORE_SIZE)), int(round(1.8*CORE_SIZE+0.8*CORE_SIZE))); + net.setRandomStimulus(st_full, int(round(recall_fraction*0.2*CORE_SIZE)), &logf, 0, int(round(0.2*CORE_SIZE))); + #elif CORE_SHAPE == RAND + net.setRandomStimulus(st_learn, CORE_SIZE, &logf); // stimulation of random CORE_SIZE neurons + net.setRandomStimulus(st_full, int(round(recall_fraction*CORE_SIZE)), &logf); + #endif + #endif // Plot stimulation //st_learn.plotAll("learning_stimulation"); @@ -649,27 +803,40 @@ int simulate(string working_dir, bool first_sim, string _purpose) if (j == tb_stim_start-1) // one step before learning stimulus { -#ifndef INTERMEDIATE_RECALL +#if defined MEMORY_CONSOLIDATION_P1 && !defined INTERMEDIATE_RECALL_P1 net.resetPlasticity(true, true, true, true); // set all plastic changes to zero (reset changes from control stimulus for instance) //net.reset(); // reset network (only for the case that a control recall was used before) #endif #if STIPULATE_CA == ON + #if CORE_SHAPE == FIRST net.stipulateFirstNeuronsAssembly(CORE_SIZE); // stipulate assembly weights + #endif #endif } if (j == tb_recall_start-1) // one step before recall stimulus { -#if SAVE_NET_STATE == ON +#if defined MEMORY_CONSOLIDATION_P1 && SAVE_NET_STATE == ON net.saveNetworkState("saved_state.txt", j); if (!copyFile("saved_state.txt", "../saved_state0.txt")) throw runtime_error(string("Network state file could not be copied to upper directory.")); else remove("saved_state.txt"); #endif - net.setBlockStimulus(st_recall, int(round(recall_fraction*CORE_SIZE))); // recall stimulation for the first recall_fraction*CORE_SIZE neurons + +#ifdef INTERMEDIATE_RECALL_P1 + net.setBlockStimulus(st_recall, int(round(recall_fraction*CORE_SIZE))); // real recall, stimulate first recall_fraction*CORE_SIZE neurons (again) +#endif + } + + if (j == n) // very last timestep + { +#if !defined MEMORY_CONSOLIDATION_P1 && SAVE_NET_STATE == ON + net.saveNetworkState("saved_state.txt", j); +#endif } + // entering fast-forward 
mode #if FF_AFTER_LEARN == ON || FF_AFTER_STIM == ON || FF_AFTER_NETLOAD == ON if ( #if FF_AFTER_LEARN == ON @@ -711,7 +878,8 @@ int simulate(string working_dir, bool first_sim, string _purpose) // in fast-forward mode (only computation of late-phase dynamics) else if (compmode == 2) { - double delta_t = 50.; // if completely numerical [OBSOLETE]: use timesteps of not more than 1 min - that is small enough to not visibly cut off the peak of the protein curve + double delta_t = 50.; // s, duration of one fast-forward timestep (for default parameters, use timesteps of not more than 1 min + // - that is small enough to not cut off the peak of the protein curve) int new_j = int(round(floor(j*dt + delta_t)/dt)); // use floor function to ensure that FF steps end with full seconds if (new_j > tb_stim_start-1 && j < tb_stim_start-1) // step in which learning stimulus begins @@ -763,7 +931,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) ///////////////////////////////////// OUTPUT FOR PLOTS /////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////////////////////// - if (j % output_period == 0 || compmode == 2) // use steps in intervals of output_period + if (j % output_period == 0 || compmode == 2) // use steps in intervals of output_period (and in intervals of delta_t in fast-forward mode) { #if SPIKE_PLOTTING == NUMBER || SPIKE_PLOTTING == NUMBER_AND_RASTER @@ -825,7 +993,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) double early_mean = net.getMeanEarlySynapticStrength(CORE_SIZE); double late_mean = net.getMeanLateSynapticStrength(CORE_SIZE); double prot_mean = net.getMeanCProteinAmount(CORE_SIZE); - int size_control = pow2(Nl) - CORE_SIZE; + int size_control = pow2(Nl_exc) - CORE_SIZE; txt_mean << early_mean << "\t\t" << net.getSDEarlySynapticStrength(early_mean, CORE_SIZE) << "\t\t" << late_mean << "\t\t" << 
net.getSDLateSynapticStrength(late_mean, CORE_SIZE) << "\t\t" @@ -843,12 +1011,8 @@ int simulate(string working_dir, bool first_sim, string _purpose) txt_mean << "\r\n"; - // Output of network plots (usually every net_output_period; under stimulation every stim_net_output_period) and clearing of spike time list - if ( (j % net_output_period == 0) - || ((j % stim_net_output_period == 0) && st_full.stimExists(j-stim_net_output_period)) // creates network plots every stim_net_output_period-th timestep - // until stim_net_output_period after the end of stimulation - || (netOutput(j)) // creates network plots at specified times - /*|| (compmode == 2) */) // if in fast-forward computation mode [OBSOLETE: net_output_period should now be chosen to be a multiple of the FF mode timestep] + // Output of network plots (and, conditionally, clearing of spike time vectors) + if (netOutput(j)) // network plots at specified times (defined by net_output, mind the larger timesteps in FF mode!) { if (j < (n - wfr/2.)) // if there are still enough timesteps left to compute the firing rate @@ -862,13 +1026,13 @@ int simulate(string working_dir, bool first_sim, string _purpose) // Output of early-phase matrix at time 't' - for (int m=0; m < pow2(Nl); m++) + for (int m=0; m < pow2(Nl_exc); m++) { - for (int n=0; n < pow2(Nl); n++) + for (int n=0; n < pow2(Nl_exc); n++) { *txt_net_t << fixed << net.getEarlySynapticStrength(synapse(m,n)); - if (n < pow2(Nl) - 1) + if (n < pow2(Nl_exc) - 1) *txt_net_t << "\t\t"; } *txt_net_t << endl; // next neuron row begins @@ -876,13 +1040,13 @@ int simulate(string working_dir, bool first_sim, string _purpose) *txt_net_t << endl; // Output of late-phase matrix at time 't' - for (int m=0; m < pow2(Nl); m++) + for (int m=0; m < pow2(Nl_exc); m++) { - for (int n=0; n < pow2(Nl); n++) + for (int n=0; n < pow2(Nl_exc); n++) { *txt_net_t << fixed << net.getLateSynapticStrength(synapse(m,n)); - if (n < pow2(Nl) - 1) + if (n < pow2(Nl_exc) - 1) *txt_net_t << 
"\t\t"; } *txt_net_t << endl; // next neuron row begins @@ -892,14 +1056,14 @@ int simulate(string working_dir, bool first_sim, string _purpose) // Output of firing rate matrix at (past) time 'tprime' and creating plot if (data_to_plot) { - instFiringRates(txt_net_tprime, jprime); - createNetworkPlotAveragedWeights(jprime*dt, h_0, Nl, z_max); + instFiringRates(txt_net_tprime, jprime); // computes firing rates and does clearing of the spike time vectors + createNetworkPlotAveragedWeights(jprime*dt, h_0, Nl_exc, z_max); if (jprime == 0) - createNetworkPlotWeights(jprime*dt, h_0, Nl, z_max); + createNetworkPlotWeights(jprime*dt, h_0, Nl_exc, z_max); } txt_net_tprime = txt_net_t; // save file handle for next plot step - jprime = j; // save time step for next plot step + jprime = j; // save timestep for next plot step data_to_plot = true; } } @@ -915,7 +1079,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) if (data_to_plot) { instFiringRates(txt_net_tprime, jprime); - createNetworkPlotAveragedWeights(jprime*dt, h_0, Nl, z_max); + createNetworkPlotAveragedWeights(jprime*dt, h_0, Nl_exc, z_max); } #endif @@ -931,11 +1095,11 @@ int simulate(string working_dir, bool first_sim, string _purpose) int sdfr_part1_inh = 0; // part 1 of the standard deviation (due to Steiner's translation theorem) - integer because it contains only spike counts int sdfr_part2_inh = 0; // part 2 of the standard deviation (due to Steiner's translation theorem) - integer because it contains only spike counts - for (int m=0; m 0) mfr_inh /= (pow2(Nl_inh) * t_max); // compute the mean firing rate of the inh. 
population from the number of spikes - sdfr = sqrt( ( double(sdfr_part1) - double(pow2(sdfr_part2)) / pow2(Nl) ) / - double(pow2(Nl)-1) ) / t_max; // compute standard deviation of the firing rate according to Steiner's translation // deviation of the firing rates according to Steiner's translation theorem + sdfr = sqrt( ( double(sdfr_part1) - double(pow2(sdfr_part2)) / pow2(Nl_exc) ) / + double(pow2(Nl_exc)-1) ) / t_max; // compute standard deviation of the firing rate according to Steiner's translation // deviation of the firing rates according to Steiner's translation theorem if (Nl_inh > 0) sdfr_inh = sqrt( ( double(sdfr_part1_inh) - double(pow2(sdfr_part2_inh)) / pow2(Nl_inh) ) / double(pow2(Nl_inh)-1) ) / t_max; // compute standard deviation of the firing rate according to Steiner's translation // theorem @@ -973,11 +1137,11 @@ int simulate(string working_dir, bool first_sim, string _purpose) // Create mean firing rate map plots // Write data files and determine minimum and maximum firing rates - for (int m=0; m < pow2(Nl)+pow2(Nl_inh); m++) + for (int m=0; m < pow2(Nl_exc)+pow2(Nl_inh); m++) { double fr = double(net.getSpikeCount(m)) / t_max; - if (m < pow2(Nl)) // neuron m is in excitatory population + if (m < pow2(Nl_exc)) // neuron m is in excitatory population { if (fr > max_firing_rate) max_firing_rate = fr; @@ -986,7 +1150,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) txt_fr << fixed << col(m) << "\t\t" << row(m) << "\t\t" << fr << endl; - if ((m+1) % Nl == 0) + if ((m+1) % Nl_exc == 0) txt_fr << endl; // another free line } else // neuron m is in inhibitory population @@ -996,7 +1160,7 @@ int simulate(string working_dir, bool first_sim, string _purpose) if (fr < min_firing_rate_inh) min_firing_rate_inh = fr; - int m_eff = m - pow2(Nl); + int m_eff = m - pow2(Nl_exc); txt_fr_inh << fixed << colG(m_eff, Nl_inh) << "\t\t" << rowG(m_eff, Nl_inh) << "\t\t" << fr << endl; if ((m_eff+1) % Nl_inh == 0) @@ -1009,7 +1173,7 @@ int 
simulate(string working_dir, bool first_sim, string _purpose) txt_fr_inh.close(); // Create firing rate plot of excitatory population - createNetworkColorPlot(gpl_fr, Nl, -1, 3, "fr_exc", "", true, "{/Symbol n} / Hz", min_firing_rate, max_firing_rate); + createNetworkColorPlot(gpl_fr, Nl_exc, -1, 3, "fr_exc", "", true, "{/Symbol n} / Hz", min_firing_rate, max_firing_rate); gplscript << "gnuplot fr_exc_map.gpl" << endl; // Create firing rate plot of inhibitory population @@ -1028,11 +1192,11 @@ int simulate(string working_dir, bool first_sim, string _purpose) txt_spike_raster.close(); //if (tb_stim_end > tb_stim_start) - // createSpikeRasterPlot(gplscript, 0.9*tb_stim_start*dt, 1.1*tb_stim_end*dt, pow2(Nl), pow2(Nl_inh)); + // createSpikeRasterPlot(gplscript, 0.9*tb_stim_start*dt, 1.1*tb_stim_end*dt, pow2(Nl_exc), pow2(Nl_inh)); //else - // createSpikeRasterPlot(gplscript, 0., t_max, pow2(Nl), pow2(Nl_inh), recall_fraction*CORE_SIZE, CORE_SIZE); + // createSpikeRasterPlot(gplscript, 0., t_max, pow2(Nl_exc), pow2(Nl_inh), recall_fraction*CORE_SIZE, CORE_SIZE); - createSpikeRasterPlot(gplscript, (t_max > 28809.99 ? 28809.99 : 0.), (t_max > 28810.20 ? 28810.20 : t_max), pow2(Nl), pow2(Nl_inh), recall_fraction*CORE_SIZE, CORE_SIZE); + createSpikeRasterPlot(gplscript, (100. < t_max ? 100. : 0.), (120. < t_max ? 120. 
: t_max), pow2(Nl_exc), pow2(Nl_inh), recall_fraction*CORE_SIZE, CORE_SIZE); #endif // ============================================================================================================================== @@ -1113,15 +1277,19 @@ void setSeekICVar(double *_seekic) /*** setParams *** * Sets the simulation parameters on given values and resets network(s) */ void setParams(double _I_0, double _sigma_WN, double _tau_syn, double _w_ee, double _w_ei, double _w_ie, double _w_ii, - string _prot_learn, string _prot_recall, int _output_period, int _net_output_period, int _N_stim, double _theta_p, double _theta_d, + double _oscill_inp_mean, double _oscill_inp_amp, + string _prot_learn, string _prot_recall, int _output_period, int _N_stim, double _theta_p, double _theta_d, double _Ca_pre, double _Ca_post, double _theta_pro_P, double _theta_pro_C, double _theta_pro_D, double _recall_fraction) { +#if OSCILL_INP != OFF + oscill_inp_mean = _oscill_inp_mean; + oscill_inp_amp = _oscill_inp_amp; +#endif tau_syn = _tau_syn; prot_learn = _prot_learn; prot_recall = _prot_recall; recall_fraction = _recall_fraction; output_period = _output_period; - net_output_period = _net_output_period; N_stim = _N_stim; net.setConstCurrent(_I_0); net.setSigma(_sigma_WN); @@ -1132,17 +1300,16 @@ void setParams(double _I_0, double _sigma_WN, double _tau_syn, double _w_ee, dou net.setCouplingStrengths(_w_ee, _w_ei, _w_ie, _w_ii); net.setCaConstants(_theta_p, _theta_d, _Ca_pre, _Ca_post); net.setPSThresholds(_theta_pro_P, _theta_pro_C, _theta_pro_D); - net.setSpikeStorageTime(net_output_period + int(round(wfr/2.))); net.reset(); } /*** Constructor *** * Sets all parameters on given values and calls constructors for Neuron instances */ -NetworkSimulation(int _Nl, int _Nl_inh, double _dt, double _t_max, +NetworkSimulation(int _Nl_exc, int _Nl_inh, double _dt, double _t_max, double _pc, double _sigma_plasticity, double _z_max, double _t_wfr, bool _ff_enabled) - : Nl(_Nl), Nl_inh(_Nl_inh), dt(_dt), 
t_max(_t_max), pc(_pc), + : Nl_exc(_Nl_exc), Nl_inh(_Nl_inh), dt(_dt), t_max(_t_max), pc(_pc), t_wfr(_t_wfr), wfr(_t_wfr/dt), ff_enabled(_ff_enabled), z_max(_z_max), - net(_dt, _Nl, _Nl_inh, _pc, _sigma_plasticity, _z_max) + net(_dt, _Nl_exc, _Nl_inh, _pc, _sigma_plasticity, _z_max) { } diff --git a/simulation-code/Neuron.cpp b/simulation-code/Neuron.cpp index f4fef44..56ae26d 100755 --- a/simulation-code/Neuron.cpp +++ b/simulation-code/Neuron.cpp @@ -30,7 +30,7 @@ friend class boost::serialization::access; private: /*** Computational parameters ***/ -double dt; // s, one time step for numerical simulation +double dt; // s, one timestep for numerical simulation /*** State variables ***/ double V; // mV, the current membrane potential at the soma @@ -44,7 +44,7 @@ double dendr_inp_integral; // nC, the integral over the charge deposited in the vector<double> dendr_inp_history; // nC, vector containing the PSC amplitudes of the last 2 ms double dendr_int_window; // s, dendritic integration window double refractory_dendr; // s, time span until refractory period for dendritic spikes is over -double t_ref_dendr; // s, absolute refractory period for dendritic spikes - has to be at least one time step! +double t_ref_dendr; // s, absolute refractory period for dendritic spikes - has to be at least one timestep!
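The pair `dendr_inp_integral`/`dendr_inp_history` above amounts to a sliding-window sum of the PSC amplitudes that arrived within the dendritic integration window (2 ms by default): each timestep, the oldest contribution is subtracted from the integral and a fresh slot is appended, so no re-summation over the window is ever needed. A minimal self-contained sketch of this bookkeeping, with illustrative names (the class below and its interface are not the code's actual API):

```cpp
#include <cassert>
#include <deque>

// Sliding-window integral of recent input amplitudes (illustrative class name).
// The running sum is updated incrementally: subtract the oldest slot, append a
// new empty one - O(1) per timestep instead of re-summing the whole window.
class SlidingWindowIntegral
{
public:
    explicit SlidingWindowIntegral(int window_steps)
        : history(window_steps, 0.), integral(0.) {}

    // register a contribution arriving in the current timestep
    void addInput(double amplitude)
    {
        history.back() += amplitude;
        integral += amplitude;
    }

    // advance by one timestep: drop the oldest slot, open a new empty one
    void step()
    {
        integral -= history.front();
        history.pop_front();
        history.push_back(0.);
    }

    double get() const { return integral; }

private:
    std::deque<double> history; // contributions of the last window_steps timesteps
    double integral;            // running sum over the window
};
```

With a window of two timesteps, an input of 1.0 followed one step later by 2.0 yields an integral of 3.0; one step later the first contribution has left the window and only 2.0 remains. The dendritic spike threshold test (`dendr_inp_integral > dendr_spike_threshold`) then operates on this running sum.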
double dendr_spike_threshold; // nC, threshold that dendr_inp_integral has to cross for a dendritic spike to be evoked double I_dendr_A_amp; // nC, amplitude of first current component of dendritic spikes double I_dendr_B_amp; // nC, amplitude of second current component of dendritic spikes @@ -61,8 +61,8 @@ double exp2; // mV, the slow component of the adaptive voltage threshold double p_P; // the protein amount for LTP in this neuron double p_C; // the common protein amount for LTP and LTD in this neuron double p_D; // the protein amount for LTD in this neuron -double I_cst; // nA, the externally applied stimulus current -double I_ext; // nA, the current evoked by external synaptic inputs (computed using an OU process with mean 0) +double I_stim; // nA, the externally applied stimulus current +double I_bg; // nA, the external background noise current (computed using Gaussian noise or an OU process with mean 0) #if COND_BASED_SYN == ON double I_int_exc; // nA, the synaptic input from excitatory network neurons affecting this neuron double I_int_inh; // nA, the synaptic input from inhibitory network neurons affecting this neuron @@ -73,7 +73,7 @@ double I_int; // nA, the synaptic input from other network neurons affecting thi double refractory; // s, time span until absolute refractory period is over bool active; // specifies if neuron is currently spiking int spike_count; // the total number of spikes that occurred since the last reset -vector<int> spike_history; // vector of all spike times (in units of time steps) in the process since the last reset +vector<int> spike_history; // vector of all spike times (in units of timesteps) in the process since the last reset int spike_history_reserve; // the maximum number of spikes int inh_incoming; // number of incoming inhibitory connections in a network int exc_incoming; // number of incoming excitatory connections in a network @@ -94,7 +94,7 @@ double tau_mem; // s, the membrane time constant double R_mem; // MΩ, resistance of
the cell membrane double V_rev; // mV, the reversal potential of the neuron double V_reset; // mV, the reset potential of the neuron -double t_ref; // s, absolute refractory period - has to be at least one time step! +double t_ref; // s, absolute refractory period - has to be at least one timestep! #if NEURON_MODEL == LIF double V_th; // mV, the threshold potential of the neuron double V_spike; // mV, the height of an action potential @@ -199,8 +199,8 @@ template void serialize(Archive &ar, const unsigned int version) ar & p_P; ar & p_C; ar & p_D; - ar & I_cst; - ar & I_ext; + ar & I_stim; + ar & I_bg; #if COND_BASED_SYN == ON ar & I_int_exc; ar & I_int_inh; @@ -308,7 +308,7 @@ double getThreshold() const * - return: the instantaneous current in nA */ double getCurrent() const { - return I_cst+I_ext+I_int; + return I_stim+I_bg+I_int; } /*** getStimulusCurrent *** @@ -316,15 +316,15 @@ double getCurrent() const * - return: the instantaneous current stimulus in nA */ double getStimulusCurrent() const { - return I_cst; + return I_stim; } -/*** getFluctCurrent *** - * Returns current external current accounting for external inputs * - * - return: the instantaneous fluctuating external synaptic current in nA */ -double getFluctCurrent() const +/*** getBGCurrent *** + * Returns current external background current accounting for external inputs * + * - return: the instantaneous external background current in nA */ +double getBGCurrent() const { - return I_ext; + return I_bg; } /*** getConstCurrent *** @@ -344,7 +344,7 @@ double getSigma() const } /*** getSynapticCurrent *** - * Returns the internal synaptic current that arrived in the previous time step * + * Returns the internal synaptic current that arrived in the previous timestep * * - return: the synaptic current in nA */ double getSynapticCurrent() const { @@ -361,7 +361,7 @@ void setSynapticCurrent(const double _I_int) #if COND_BASED_SYN == ON /*** getExcSynapticCurrent *** - * Returns the internal excitatory 
synaptic conductance of the previous time step * + * Returns the internal excitatory synaptic conductance of the previous timestep * * - return: the excitatory synaptic conductance in nS */ double getExcSynapticCurrent() const { @@ -369,7 +369,7 @@ double getExcSynapticCurrent() const } /*** getInhSynapticCurrent *** - * Returns the internal inhibitory synaptic conductance of the previous time step * + * Returns the internal inhibitory synaptic conductance of the previous timestep * * - return: the inhibitory synaptic conductance in nS */ double getInhSynapticCurrent() const { @@ -429,7 +429,7 @@ void updateDendriteInput(const double psc_amplitude) } /*** getDendriticCurrent *** - * Returns the current that dendritic spiking caused in the previous time step * + * Returns the current that dendritic spiking caused in the previous timestep * * - return: the synaptic current in nA */ double getDendriticCurrent() const { @@ -469,7 +469,7 @@ int getSpikeTime(int n) const } /*** spikeAt *** - * Returns whether or not a spike has occurred at a given time step, begins searching * + * Returns whether or not a spike has occurred at a given timestep, begins searching * * from latest spike * * - int t_step: the time bin at which the spike should have occurred * - return: true if a spike occurred, false if not */ @@ -542,12 +542,12 @@ int getSpikeHistorySize() const } /*** processTimeStep *** - * Processes one time step (of duration delta_t) for the neuron * - * - int tb_step: time step at which to evaluate stimulus (< 0 before stimulus onset) * - * - int tb_init: initial time step for simple decay process (should be positive only in decaying state!) */ + * Processes one timestep (of duration delta_t) for the neuron * + * - int tb_step: timestep at which to evaluate stimulus (< 0 before stimulus onset) * + * - int tb_init: initial timestep for simple decay process (should be positive only in decaying state!) 
*/ void processTimeStep(int tb_step, int tb_init) { - double delta_t; // duration of the time step in seconds, either dt or tb_step-tb_init + double delta_t; // duration of the timestep in seconds, either dt or tb_step-tb_init if (tb_init < 0) delta_t = dt; @@ -555,9 +555,9 @@ void processTimeStep(int tb_step, int tb_init) delta_t = (tb_step - tb_init) * dt; #if SYNAPSE_MODEL == DELTA - I_ext = normalRandomNumber() * sqrt(1/delta_t) * sigma_WN + I_0; + I_bg = normalRandomNumber() * sqrt(1/delta_t) * sigma_WN + I_0; #elif SYNAPSE_MODEL == MONOEXP - I_ext = (I_ext-I_0) * exp(-delta_t/tau_OU) + normalRandomNumber() * sqrt(1. - exp(-2.*delta_t/tau_OU)) * sigma_OU + I_0; // compute external synaptic input in nA + I_bg = (I_bg-I_0) * exp(-delta_t/tau_OU) + normalRandomNumber() * sqrt(1. - exp(-2.*delta_t/tau_OU)) * sigma_OU + I_0; // compute external synaptic input in nA #endif #if COND_BASED_SYN == ON @@ -572,72 +572,71 @@ void processTimeStep(int tb_step, int tb_init) #if NEURON_MODEL == MAT2 - // MAT(2) neuron - V = V * exp(-delta_t/tau_mem) + R_mem*(I_int + I_ext) * (1. - exp(-delta_t/tau_mem)); // compute mem. pot. in mV (analytical solution) + // MAT(2) neuron + V = V * exp(-delta_t/tau_mem) + R_mem*(I_int + I_bg) * (1. - exp(-delta_t/tau_mem)); // compute mem. pot. 
in mV (analytical solution) - exp1 = exp1 * exp(-delta_t/0.01); // fast threshold relaxation - exp2 = exp2 * exp(-delta_t/0.2); // slow threshold relaxation + exp1 = exp1 * exp(-delta_t/0.01); // fast threshold relaxation + exp2 = exp2 * exp(-delta_t/0.2); // slow threshold relaxation - if (active) - { + if (active) + { - exp1 = exp1 + 0.015; // add new spike with full contribution alpha_1 - exp2 = exp2 + 0.003; // add new spike with full contribution alpha_2 + exp1 = exp1 + 0.015; // add new spike with full contribution alpha_1 + exp2 = exp2 + 0.003; // add new spike with full contribution alpha_2 + + active = false; + } - active = false; - } - - ad_th = ad_th_limit + exp1 + exp2; // update adaptive threshold + ad_th = ad_th_limit + exp1 + exp2; // update adaptive threshold #elif NEURON_MODEL == LIF #if DENDR_SPIKES == ON - // exponential decay of dendritic spikes - I_dendr_A *= exp(- delta_t / tau_dendr_A); - I_dendr_B *= exp(- delta_t / tau_dendr_B); - I_dendr_C *= exp(- delta_t / tau_dendr_C); + // exponential decay of dendritic spikes + I_dendr_A *= exp(- delta_t / tau_dendr_A); + I_dendr_B *= exp(- delta_t / tau_dendr_B); + I_dendr_C *= exp(- delta_t / tau_dendr_C); - if (refractory_dendr > EPSILON) // if in refractory period for dendritic spikes - { - refractory_dendr -= delta_t; - } - else - { - if (dendr_inp_integral > dendr_spike_threshold) // threshold has been crossed + if (refractory_dendr > EPSILON) // if in refractory period for dendritic spikes { - // dendrite spike contributions do not have to be added up because possible remaining contributions should have decayed to zero - I_dendr_A = -55.; //I_dendr_A -= 55.; - I_dendr_B = 64.; //I_dendr_B += 64.; - I_dendr_C = -9.; //I_dendr_C -= 9.; - - refractory_dendr = t_ref_dendr; + refractory_dendr -= delta_t; + } + else + { + if (dendr_inp_integral > dendr_spike_threshold) // threshold has been crossed + { + // dendrite spike contributions do not have to be added up because possible remaining 
contributions should have decayed to zero + I_dendr_A = -55.; //I_dendr_A -= 55.; + I_dendr_B = 64.; //I_dendr_B += 64.; + I_dendr_C = -9.; //I_dendr_C -= 9.; + + refractory_dendr = t_ref_dendr; + } } - } - //compDendriticCurrent(tb_step*delta_t - I_dendr_A); - I_dendr = I_dendr_A + I_dendr_B + I_dendr_C; - - dendr_inp_integral -= dendr_inp_history[0]; // remove oldest contributions from integral - dendr_inp_history.erase(dendr_inp_history.begin()); // remove oldest contributions from history - dendr_inp_history.push_back(0.); // add slot for new contributions + //compDendriticCurrent(tb_step*delta_t - I_dendr_A); + I_dendr = I_dendr_A + I_dendr_B + I_dendr_C; + + dendr_inp_integral -= dendr_inp_history[0]; // remove oldest contributions from integral + dendr_inp_history.erase(dendr_inp_history.begin()); // remove oldest contributions from history + dendr_inp_history.push_back(0.); // add slot for new contributions #endif - // LIF - //V += delta_t/tau_mem * (- V + V_rev + R_mem*(I_ext + I_int)); // compute mem. pot. in mV (Euler method) - V = V * exp(-delta_t/tau_mem) + (V_rev + R_mem*( I_ext - + I_int + // LIF + //V += delta_t/tau_mem * (- V + V_rev + R_mem*(I_bg + I_int)); // compute mem. pot. in mV (Euler method) + V = V * exp(-delta_t/tau_mem) + (V_rev + R_mem*( I_bg + + I_int #if DENDR_SPIKES == ON - + I_dendr + + I_dendr #endif - )) * (1. - exp(-delta_t/tau_mem)); // compute mem. pot. in mV (analytical solution) + )) * (1. - exp(-delta_t/tau_mem)); // compute mem. pot. 
in mV (analytical solution) #endif - #ifdef TWO_NEURONS_ONE_SYNAPSE } // poisson_neuron == false @@ -647,17 +646,17 @@ void processTimeStep(int tb_step, int tb_init) V = V_reset; } // poisson_neuron == true #endif - if (cst.isSet() && tb_init < 0 && abs(I_cst = cst.get(tb_step)) > EPSILON) // stimulation; get stimulus current in nA + if (cst.isSet() && tb_init < 0 && abs(I_stim = cst.get(tb_step)) > EPSILON) // stimulation; get stimulus current in nA { #if STIM_TYPE == POISSON_STIMULATION - V += R_mem * I_cst; + V += R_mem * I_stim; #else - V += R_mem * I_cst * (1. - exp(-delta_t/tau_mem)); + V += R_mem * I_stim * (1. - exp(-delta_t/tau_mem)); #endif #ifdef TWO_NEURONS_ONE_SYNAPSE #if NEURON_MODEL == MAT2 - V = ad_th; // definite spiking (the magnitude of I_cst is not important as long as it is finite) + V = ad_th; // definite spiking (the magnitude of I_stim is not important as long as it is finite) #elif NEURON_MODEL == LIF V = V_th; #endif @@ -796,7 +795,7 @@ void setType(int _type) } /*** setSpikeHistoryMemory *** - * Sets the size that shall be reserved for the spike history * + * Sets the RAM size that shall be reserved for the spike history * * - int storage_steps: the size of the storage timespan in timesteps * * - return: the reserved size of the spike history vector */ int setSpikeHistoryMemory(int storage_steps) @@ -866,8 +865,8 @@ void reset() p_P = 0.0; p_C = 0.0; p_D = 0.0; - I_cst = 0.0; - I_ext = 0.0; + I_stim = 0.0; + I_bg = 0.0; #if COND_BASED_SYN == ON I_int_exc = 0.0; I_int_inh = 0.0; diff --git a/simulation-code/SpecialCases.hpp b/simulation-code/SpecialCases.hpp index c3062bd..05f9544 100644 --- a/simulation-code/SpecialCases.hpp +++ b/simulation-code/SpecialCases.hpp @@ -5,7 +5,14 @@ /*** Copyright 2017-2021 Jannik Luboeinski *** *** licensed under Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0) ***/ -// Special case to basic induction protocols for synaptic plasticity, with the same model as Li et al., 2016 +// Special case to 
measure the plasticity between two neurons as a function of their firing rates (uses prot_learn and prot_recall to stimulate) +//#define PLASTICITY_OVER_FREQ +#ifdef PLASTICITY_OVER_FREQ + #define TWO_NEURONS_ONE_SYNAPSE + // here, NEURON_MODEL and SYNAPSE_MODEL options are rather irrelevant because pre- and postsynaptic neuron are modeled as Poisson neurons +#endif + +// Special case of basic induction protocols for synaptic plasticity, with the same model as Li et al., 2016 // --> neuron 0 is stimulated via neuron 1, which is depolarized following the stimulus protocol // --> changes some global variables in int main() //#define TWO_NEURONS_ONE_SYNAPSE @@ -21,13 +28,32 @@ #undef PROTEIN_POOLS #define PROTEIN_POOLS POOLS_C // uses protein pool setting C #undef STIPULATE_CA - #define STIPULATE_CA OFF // switches off stipulation + #define STIPULATE_CA OFF // switches off stipulation of a cell assembly + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF #undef SAVE_NET_STATE #define SAVE_NET_STATE OFF // switches off saving the network state #endif -// Special case to basic induction protocols for synaptic plasticity, with monoxeponential synapses and LIF model +// Special case of basic induction protocols for synaptic plasticity, with monoexponential synapses and LIF model +// as in Luboeinski and Tetzlaff, 2021, https://doi.org/10.1038/s42003-021-01778-y // --> neuron 0 is stimulated via neuron 1, which is depolarized following the stimulus protocol // --> changes some global variables in int main() //#define TWO_NEURONS_ONE_SYNAPSE_ALT @@ -44,34 +70,224 @@
#undef PROTEIN_POOLS #define PROTEIN_POOLS POOLS_C // uses protein pool setting C #undef STIPULATE_CA - #define STIPULATE_CA OFF // switches off stipulation + #define STIPULATE_CA OFF // switches off stipulation of a cell assembly + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF #undef SAVE_NET_STATE #define SAVE_NET_STATE OFF // switches off saving the network state #endif +// Special case for seeking mean input current +//#define SEEK_I_0 0.5 // if defined, I_const will be varied and the I_const value that leads to the defined mean frequency value (e.g., 0.5 Hz) in the absence of plasticity and stimulation will be determined +#ifdef SEEK_I_0 + #undef PLASTICITY + #define PLASTICITY OFF // switches off plasticity +#endif -// Special case to measure the weight between two neurons as a function of their firing rates (uses prot_learn and prot_recall to stimulate) -//#define PLASTICITY_OVER_FREQ -#ifdef PLASTICITY_OVER_FREQ - #define TWO_NEURONS_ONE_SYNAPSE +// Simulations to learn and consolidate organizational paradigms in Luboeinski and Tetzlaff, 2021, "Organization and priming of long-term memory representations with two-phase plasticity" +//#define ORGANIZATION_P2 +#ifdef ORGANIZATION_P2 #undef STIM_TYPE - #define STIM_TYPE POISSON_STIMULATION // uses Poisson-like spikes + #define STIM_TYPE OU_STIMULATION #undef NEURON_MODEL - #define NEURON_MODEL MAT2 // uses Multi-Adaptive Threshold Model + #define NEURON_MODEL LIF #undef SYNAPSE_MODEL - #define SYNAPSE_MODEL MONOEXP // uses delta synapses + #define SYNAPSE_MODEL MONOEXP #undef PLASTICITY - #define 
PLASTICITY OFF // switches on plasticity + #define PLASTICITY CALCIUM_AND_STC + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C #undef STIPULATE_CA - #define STIPULATE_CA OFF // switches off stipulation + #define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE CORE_SHAPE_CMD // is to be set via compiler option, see shell script "compile_organization" + #undef CORE_SIZE + #define CORE_SIZE 600 + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD 40 + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM ON // important for priming + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF #endif -// Special case for seeking mean input current -//#define SEEK_I_0 0.5 // if defined, I_const will be varied and the I_const value that leads to the defined mean frequency value (e.g., 0.5 Hz) in the absence of plasticity and stimulation will be determined -#ifdef SEEK_I_0 + +// Simulations to learn and consolidate organizational paradigms in Luboeinski and Tetzlaff, 2021, "Organization and priming of long-term memory representations with two-phase plasticity" +// without LTD +//#define ORGANIZATION_NOLTD_P2 +#ifdef ORGANIZATION_NOLTD_P2 + #undef STIM_TYPE + #define STIM_TYPE OU_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP #undef PLASTICITY - #define PLASTICITY OFF // switches off plasticity + #define PLASTICITY CALCIUM_AND_STC + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C + #undef STIPULATE_CA + #define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE CORE_SHAPE_CMD // is to be set via compiler option, see shell script "compile_organization" + #undef CORE_SIZE + #define CORE_SIZE 600 
+ #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD 40 + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD 1000 // effectively switches off LTD + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM ON + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF +#endif + +// Simulations to test recall in Luboeinski and Tetzlaff, 2021, "Organization and priming of long-term memory representations with two-phase plasticity" +//#define RECALL_P2 +#ifdef RECALL_P2 + #undef STIM_TYPE + #define STIM_TYPE OU_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP + #undef PLASTICITY + #define PLASTICITY OFF // no plasticity + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C + #undef STIPULATE_CA + #define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE CORE_SHAPE_CMD // is to be set via compiler option, see shell script "compile_organization" + #undef CORE_SIZE + #define CORE_SIZE 600 + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN OFF + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF +#endif + +// Simulations to investigate the spontaneous activation of assemblies in Luboeinski and Tetzlaff, 2021, "Organization and priming of long-term memory representations with two-phase plasticity" +//#define ACTIVATION_P2 +#ifdef ACTIVATION_P2 + #undef STIM_TYPE + #define STIM_TYPE OU_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP + #undef PLASTICITY + #define 
PLASTICITY OFF // no plasticity + #undef STIPULATE_CA + #define STIPULATE_CA OFF + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN OFF + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD OFF + #undef OSCILL_INP + #define OSCILL_INP OFF +#endif + +// Special case for simulations of memory consolidation and recall with intermediate recall in Luboeinski and Tetzlaff, 2021, https://doi.org/10.1038/s42003-021-01778-y +// (using the learning stimulation to apply an intermediate recall stimulus) +//#define INTERMEDIATE_RECALL_P1 +#ifdef INTERMEDIATE_RECALL_P1 + #define MEMORY_CONSOLIDATION_P1 + #define CORE_SIZE_CMD 150 +#endif + +// Simulations of memory consolidation and recall in Luboeinski and Tetzlaff, 2021, https://doi.org/10.1038/s42003-021-01778-y +//#define MEMORY_CONSOLIDATION_P1 +#ifdef MEMORY_CONSOLIDATION_P1 + #undef STIM_TYPE + #define STIM_TYPE OU_STIMULATION + #undef NEURON_MODEL + #define NEURON_MODEL LIF + #undef SYNAPSE_MODEL + #define SYNAPSE_MODEL MONOEXP + #undef PLASTICITY + #define PLASTICITY CALCIUM_AND_STC + #undef PROTEIN_POOLS + #define PROTEIN_POOLS POOLS_C + #undef STIPULATE_CA + #define STIPULATE_CA OFF + #undef CORE_SHAPE + #define CORE_SHAPE FIRST + #undef CORE_SIZE + #define CORE_SIZE CORE_SIZE_CMD // can be set via compiler option, see shell script "compile_sizes" + #undef COND_BASED_SYN + #define COND_BASED_SYN OFF + #undef SYN_SCALING + #define SYN_SCALING OFF + #undef DENDR_SPIKES + #define DENDR_SPIKES OFF + #undef LTP_FR_THRESHOLD + #define LTP_FR_THRESHOLD OFF + #undef LTD_FR_THRESHOLD + #define LTD_FR_THRESHOLD OFF + #undef FF_AFTER_LEARN + #define FF_AFTER_LEARN ON + #undef FF_AFTER_STIM + #define FF_AFTER_STIM OFF + #undef FF_AFTER_NETLOAD + #define FF_AFTER_NETLOAD ON + #undef OSCILL_INP + #define OSCILL_INP OFF #endif -// 
Special case for using the learning stimulation to apply an intermediate recall stimulus
-//#ifdef INTERMEDIATE_RECALL

diff --git a/simulation-code/Stimulus.cpp b/simulation-code/Stimulus.cpp
index 8cf6b27..35d3699 100755
--- a/simulation-code/Stimulus.cpp
+++ b/simulation-code/Stimulus.cpp
@@ -21,29 +21,29 @@ class Stimulus
 {
 private:
-	int n; // number of time steps for the shape
-	double* shape; // deterministic stimulus magnitude values for all time steps
+	int n; // number of timesteps for the shape
+	double* shape; // deterministic stimulus magnitude values for all timesteps
 #if STIM_TYPE != DET_STIMULATION
 	minstd_rand0 rg; // default uniform generator for random numbers
 #if STIM_TYPE == POISSON_STIMULATION
 	double expc_spikes; // expected Poisson spikes per timestep (Poisson spike occurrence frequency times the duration of one timestep)
 	double poisson_contrib; // magnitude of contribution of one Poisson spike
 	int N_P; // number of Poisson neurons
-	//double* prob_dist; // probability distribution for firing of 1,2,3...N_P Poisson neurons in one time step
+	//double* prob_dist; // probability distribution for firing of 1,2,3...N_P Poisson neurons in one timestep
 	uniform_real_distribution<double> u_dist; // uniform distribution
 #elif STIM_TYPE == GAUSS_STIMULATION || STIM_TYPE == OU_STIMULATION
 	normal_distribution<double> n_dist; // normal distribution to obtain Gaussian white noise, constructed in Neuron class constructor
 	double mean_stim; // mean of the Gaussian white noise or OU process used for stimulation
 	double sigma_stim; // standard deviation of the Gaussian white noise or OU process used for stimulation ("discrete" standard deviation, contains 1/sqrt(dt) already)
-	double expOU; // exponential decay factor for one time step of OU process
-	double noisePrefactorOU; // pre-factor for one time step for white noise in OU formula
-	double stim_prev; // value of the OU process in previous time step
+	double expOU; // exponential decay factor for one timestep of OU process
+	double noisePrefactorOU; // pre-factor for one timestep for white noise in OU formula
+	double stim_prev; // value of the OU process in previous timestep
 #endif
 #endif

 public:
-	const int start; // time step at which the stimulus begins
-	const int end; // time step at which the stimulus begins
+	const int start; // timestep at which the stimulus begins
+	const int end; // timestep at which the stimulus ends

 	/*** getShapeLength ***
 	 * Returns the length of one period of a deterministic Stimulus shape as number of time bins *
@@ -121,7 +121,7 @@ class Stimulus
 		expc_spikes = nd_freq * dt;
 		N_P = N;

-		// compute probability distribution for firing of 1,2,3...N_P Poisson neurons in one time step
+		// compute probability distribution for firing of 1,2,3...N_P Poisson neurons in one timestep
 		/*freeProbDist();
 		prob_dist = new double[N_P];
@@ -440,7 +440,7 @@ class Stimulus
 #endif
 	int stimulation_start; // timestep at which all stimulation begins
 	int stimulation_end; // timestep at which all stimulation ends
-	double dt; // duration of one time step in s
+	double dt; // duration of one timestep in s
 	char* ppdata; // pre-processed stimulus data (indices for each timestep that indicate if there is stimulation and what kind of stimulation it is)

 public:
@@ -471,7 +471,7 @@ class Stimulus
 	 * If there is a stimulus defined for the given time, returns the stimulus magnitude at this time *
 	 * (stimulation only takes place in defined stimulus intervals, the start of an interval marks *
 	 * the start of a stimulus shape period) *
-	 * - t_step: time step at which to evaluate stimulus
+	 * - t_step: timestep at which to evaluate stimulus
 	 * - return: stimulus at given time */
 	double get(int t_step)
 	{
@@ -484,19 +484,19 @@ class Stimulus
 		for (int i=0; i= intervals[i].start && t_step < intervals[i].end)
-				return intervals[i].get(t_step); // get stimulus for this interval and time step
+				return intervals[i].get(t_step); // get stimulus for this interval and timestep
 		}
 #if STIM_PREPROC == ON
 		}
 #endif
 	}
-		return 0.; // time step does not lie within any interval / does not contain stimulation
+		return 0.; // timestep does not lie within any interval / does not contain stimulation
 	}

 	/*** stimExists ***
 	 * If there is a stimulus defined for the given time, returns true *
-	 * - t_step: time step at which to evaluate stimulus
+	 * - t_step: timestep at which to evaluate stimulus
 	 * - return: does stimulus exist at given time */
 	bool stimExists(int t_step)
 	{
@@ -513,7 +513,7 @@ class Stimulus
 		}
 #endif
 	}
-		return false; // time step does not lie within any interval / does not contain stimulation
+		return false; // timestep does not lie within any interval / does not contain stimulation
 	}

@@ -860,7 +860,7 @@ class Stimulus
 	/*** Principal constructor ***
 	 * Sets main characteristics of stimulus *
-	 * - int _n: total time step count of one period */
+	 * - double _dt: duration of one timestep in s */
 	Stimulus(double _dt) : dt(_dt)
 	{
 		clear();
diff --git a/simulation-code/StimulusProtocols.hpp b/simulation-code/StimulusProtocols.hpp
index aaff497..a1561c4 100755
--- a/simulation-code/StimulusProtocols.hpp
+++ b/simulation-code/StimulusProtocols.hpp
@@ -34,12 +34,13 @@ void stimFunc(Stimulus* st, double frequency, double w_stim, int N_stim, double
 * Creates a Stimulus object according to specified stimulation protocols *
 * - prot_learn: string specifying the learning protocol that shall be used *
 * - prot_recall: string specifying the recall protocol that shall be used *
- * - dt: duration of one time step in seconds *
+ * - dt: duration of one timestep in seconds *
 * - w_stim: coupling strength between input layer and receiving layer *
 * - N_stim: number of neurons in the input layer *
 * - tau_syn: the synaptic time constant *
+ * - logf: pointer to log file handle (for printing interesting information) *
 * - return: Stimulus object */
-Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, double dt, double w_stim, int N_stim, double tau_syn)
+Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, double dt, double w_stim, int N_stim, double tau_syn, ofstream* logf)
 {
 	Stimulus st(dt); // new Stimulus object
 	double frequency; // stimulation frequency
@@ -150,6 +151,31 @@ Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, doub
 		index = st.addStimulationInterval(int(round(11.0/dt)), int(round(11.1/dt))); // add start and end time of stimulation
 		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+	}
+	else if (strstr(pt, "TRIPLETat") == pt) // "generic" TRIPLET protocol
+	{
+		double at;
+		char* pt2 = strstr(pt, "at");
+
+		frequency = 100.0; // Hz
+
+		if (pt2 != NULL) // if time of occurrence is specified
+		{
+			pt2 += 2; // skip 'a' and 't'
+			at = atof(pt2);
+		}
+		else // "standard" TRIPLET at t = 10.0 s
+		{
+			at = 10.0;
+		}
+
+		index = st.addStimulationInterval(int(round(at/dt)), int(round((at + 0.1)/dt))); // add start and end time of stimulation
+		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+		index = st.addStimulationInterval(int(round((at + 0.5)/dt)), int(round((at + 0.6)/dt))); // add start and end time of stimulation
+		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+		index = st.addStimulationInterval(int(round((at + 1.0)/dt)), int(round((at + 1.1)/dt))); // add start and end time of stimulation
+		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+	}
 	else if (!prot_learn.compare("TESTA"))
 	{
@@ -166,23 +192,9 @@ Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, doub
 		//index = st.addStimulationInterval(int(round(3.0/dt)), int(round(3.1/dt))); // add start and end time of stimulation
 		//stimFunc(&st, 2*frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
 	}
-	else if (!prot_learn.compare("TEST10"))
+	else
 	{
-		frequency = 100.0; // Hz
-		index = st.addStimulationInterval(int(round(1.0/dt)), int(round(2.0/dt))); // add start and end time of stimulation
-		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
-	}
-	else if (!prot_learn.compare("TEST3"))
-	{
-		frequency = 100.0; // Hz
-		index = st.addStimulationInterval(int(round(1.0/dt)), int(round(1.3/dt))); // add start and end time of stimulation
-		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
-	}
-	else if (!prot_learn.compare("TESTSAV"))
-	{
-		frequency = 100.0; // Hz
-		index = st.addStimulationInterval(int(round(0.1/dt)), int(round(0.2/dt))); // add start and end time of stimulation
-		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+		*logf << "No known learning protocol specified." << endl;
 	}

 	// recall protocol:
@@ -242,8 +254,13 @@ Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, doub
 		//index = st.addStimulationInterval(int(round(3.0/dt)), int(round(3.2/dt))); // add start and end time of stimulation
 		index = st.addStimulationInterval(int(round(0.2/dt)), int(round(0.205/dt))); // add start and end time of stimulation
 		stimFunc(&st, frequency, w_stim, N_stim, tau_syn, index); // actually add stimulation to the interval
+	}
+	else
+	{
+		*logf << "No known recall protocol specified." << endl;
 	}
+
 #if STIM_PREPROC == ON
 	st.preProcess();
 #endif
@@ -253,8 +270,8 @@ Stimulus createStimulusFromProtocols(string prot_learn, string prot_recall, doub
 /*** createOscillStimulus ***
 * Creates a Stimulus object with sinusoidal oscillating input during the whole simulation *
- * - dt: duration of one time step in seconds *
- * - tb_max: number of time steps for the whole simulation *
+ * - dt: duration of one timestep in seconds *
+ * - tb_max: number of timesteps for the whole simulation *
 * - period: period for the sine-shaped input current *
 * - mean: mean value for the sine-shaped input current *
 * - amplitude: amplitude for the sine-shaped input current *
diff --git a/simulation-code/compile_2N1S b/simulation-code/build_scripts_paper1/compile_2N1S
similarity index 83%
rename from simulation-code/compile_2N1S
rename to simulation-code/build_scripts_paper1/compile_2N1S
index 52c4e6b..e749128 100755
--- a/simulation-code/compile_2N1S
+++ b/simulation-code/build_scripts_paper1/compile_2N1S
@@ -1,4 +1,8 @@
 #!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+
 rm -f code.zip
 zip -q -D code.zip * -x *.out *.o *.txt
 g++ -std=c++11 \
@@ -9,3 +13,5 @@ g++ -std=c++11 \
 	-lboost_serialization -static \
 	-o "2N1S.out"
 rm code.zip
+
+cd "${current_working_dir}"
diff --git a/simulation-code/compile_irs b/simulation-code/build_scripts_paper1/compile_IRS
similarity index 68%
rename from simulation-code/compile_irs
rename to simulation-code/build_scripts_paper1/compile_IRS
@@ -1,11 +1,17 @@
 #!/bin/sh
+
+current_working_dir=${PWD}
+cd ..
+ rm -f code.zip zip -q -D code.zip * -x *.out *.o *.txt g++ -std=c++11 \ -O2 NetworkMain.cpp \ -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default \ - -D INTERMEDIATE_RECALL \ + -D INTERMEDIATE_RECALL_P1 \ -Wno-unused-result \ -lboost_serialization -static \ - -o "net_irs.out" + -o "net150_IRS.out" rm code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/build_scripts_paper1/compile_sizes b/simulation-code/build_scripts_paper1/compile_sizes new file mode 100755 index 0000000..80814ab --- /dev/null +++ b/simulation-code/build_scripts_paper1/compile_sizes @@ -0,0 +1,48 @@ +#!/bin/sh + +current_working_dir=${PWD} +cd .. + +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=50 -Wno-unused-result -lboost_serialization -static -o "net50.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=100 -Wno-unused-result -lboost_serialization -static -o "net100.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=150 -Wno-unused-result -lboost_serialization -static -o "net150.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=200 -Wno-unused-result -lboost_serialization -static -o "net200.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 
\ + -D CORE_SIZE_CMD=250 -Wno-unused-result -lboost_serialization -static -o "net250.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=300 -Wno-unused-result -lboost_serialization -static -o "net300.out" +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=350 -Wno-unused-result -lboost_serialization -static -o "net350.out" +rm code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=400 -Wno-unused-result -lboost_serialization -static -o "net400.out" +rm code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=450 -Wno-unused-result -lboost_serialization -static -o "net450.out" +rm code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D MEMORY_CONSOLIDATION_P1 \ + -D CORE_SIZE_CMD=500 -Wno-unused-result -lboost_serialization -static -o "net500.out" +rm code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/build_scripts_paper2/compile_activation b/simulation-code/build_scripts_paper2/compile_activation new file mode 100755 index 0000000..fddf42f --- /dev/null +++ b/simulation-code/build_scripts_paper2/compile_activation @@ -0,0 +1,12 @@ +#!/bin/sh + +current_working_dir=${PWD} +cd .. 
+ +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ACTIVATION_P2 \ + -Wno-unused-result -lboost_serialization -static -o "net_activation.out" +rm -f code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/build_scripts_paper2/compile_organization b/simulation-code/build_scripts_paper2/compile_organization new file mode 100755 index 0000000..a5a5c11 --- /dev/null +++ b/simulation-code/build_scripts_paper2/compile_organization @@ -0,0 +1,66 @@ +#!/bin/sh + +current_working_dir=${PWD} +cd .. + +# see 'Definitions.hpp' for values of CORE_SHAPE_CMD + +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +mkdir -p "organization/1st/FIRST" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=1 -Wno-unused-result -lboost_serialization -static -o "organization/1st/FIRST/net.out" +mkdir -p "organization/2nd/NOOVERLAP" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=2 -Wno-unused-result -lboost_serialization -static -o "organization/2nd/NOOVERLAP/net.out" +mkdir -p "organization/2nd/OVERLAP10" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=3 -Wno-unused-result -lboost_serialization -static -o "organization/2nd/OVERLAP10/net.out" +mkdir -p "organization/2nd/OVERLAP15" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=4 -Wno-unused-result -lboost_serialization -static -o "organization/2nd/OVERLAP15/net.out" +mkdir -p "organization/2nd/OVERLAP20" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py 
-Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=5 -Wno-unused-result -lboost_serialization -static -o "organization/2nd/OVERLAP20/net.out" +mkdir -p "organization/3rd/NOOVERLAP" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=6 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/NOOVERLAP/net.out" +mkdir -p "organization/3rd/OVERLAP10" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=7 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP10/net.out" +mkdir -p "organization/3rd/OVERLAP10 no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=8 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP10 no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP10 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=9 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP10 no AC, no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP10 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=10 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP10 no BC, no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP15" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=11 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP15/net.out" +mkdir -p "organization/3rd/OVERLAP15 no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp 
-Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=12 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP15 no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP15 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=13 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP15 no AC, no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP15 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=14 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP15 no BC, no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP20" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=15 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP20/net.out" +mkdir -p "organization/3rd/OVERLAP20 no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=16 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP20 no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP20 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=17 -Wno-unused-result -lboost_serialization -static -o "organization/3rd/OVERLAP20 no AC, no ABC/net.out" +mkdir -p "organization/3rd/OVERLAP20 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_P2 \ + -D CORE_SHAPE_CMD=18 -Wno-unused-result -lboost_serialization -static -o 
"organization/3rd/OVERLAP20 no BC, no ABC/net.out" +rm -f code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/build_scripts_paper2/compile_organization_noLTD b/simulation-code/build_scripts_paper2/compile_organization_noLTD new file mode 100755 index 0000000..530f242 --- /dev/null +++ b/simulation-code/build_scripts_paper2/compile_organization_noLTD @@ -0,0 +1,66 @@ +#!/bin/sh + +current_working_dir=${PWD} +cd .. + +# see 'Definitions.hpp' for values of CORE_SHAPE_CMD + +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt +mkdir -p "organization_noLTD/1st/FIRST" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=1 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/1st/FIRST/net.out" +mkdir -p "organization_noLTD/2nd/NOOVERLAP" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=2 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/2nd/NOOVERLAP/net.out" +mkdir -p "organization_noLTD/2nd/OVERLAP10" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=3 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/2nd/OVERLAP10/net.out" +mkdir -p "organization_noLTD/2nd/OVERLAP15" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=4 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/2nd/OVERLAP15/net.out" +mkdir -p "organization_noLTD/2nd/OVERLAP20" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=5 -Wno-unused-result -lboost_serialization -static -o 
"organization_noLTD/2nd/OVERLAP20/net.out" +mkdir -p "organization_noLTD/3rd/NOOVERLAP" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=6 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/NOOVERLAP/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP10" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=7 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP10/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP10 no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=8 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP10 no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP10 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=9 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP10 no AC, no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP10 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=10 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP10 no BC, no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP15" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=11 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP15/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP15 no ABC" +g++ -std=c++11 -O2 
NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=12 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP15 no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP15 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=13 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP15 no AC, no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP15 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=14 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP15 no BC, no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP20" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=15 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP20/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP20 no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=16 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP20 no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP20 no AC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=17 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP20 no AC, no ABC/net.out" +mkdir -p "organization_noLTD/3rd/OVERLAP20 no BC, no ABC" +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py 
-Wl,--format=default -D ORGANIZATION_NOLTD_P2 \ + -D CORE_SHAPE_CMD=18 -Wno-unused-result -lboost_serialization -static -o "organization_noLTD/3rd/OVERLAP20 no BC, no ABC/net.out" +rm -f code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/build_scripts_paper2/compile_recall b/simulation-code/build_scripts_paper2/compile_recall new file mode 100755 index 0000000..5a2cfec --- /dev/null +++ b/simulation-code/build_scripts_paper2/compile_recall @@ -0,0 +1,24 @@ +#!/bin/sh + +current_working_dir=${PWD} +cd .. + +# see 'Definitions.hpp' for values of CORE_SHAPE_CMD + +rm -f code.zip +zip -q -D code.zip * -x *.out *.o *.txt + +mkdir "recall" + +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D RECALL_P2 \ + -D CORE_SHAPE_CMD=1 -Wno-unused-result -lboost_serialization -static -o "recall/netA.out" + +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D RECALL_P2 \ + -D CORE_SHAPE_CMD=2 -Wno-unused-result -lboost_serialization -static -o "recall/netB.out" + +g++ -std=c++11 -O2 NetworkMain.cpp -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default -D RECALL_P2 \ + -D CORE_SHAPE_CMD=6 -Wno-unused-result -lboost_serialization -static -o "recall/netC.out" + +rm -f code.zip + +cd "${current_working_dir}" diff --git a/simulation-code/compile b/simulation-code/compile deleted file mode 100755 index 9f88fdd..0000000 --- a/simulation-code/compile +++ /dev/null @@ -1,10 +0,0 @@ -#!/bin/sh -rm -f code.zip -zip -q -D code.zip * -x *.out *.o *.txt -g++ -std=c++11 \ - -O2 NetworkMain.cpp \ - -Wl,--format=binary -Wl,code.zip -Wl,plotFunctions.py -Wl,--format=default \ - -Wno-unused-result \ - -lboost_serialization -static \ - -o "net.out" -rm code.zip diff --git a/simulation-code/plotFunctions.py b/simulation-code/plotFunctions.py index 9af23bc..9621204 100755 --- a/simulation-code/plotFunctions.py +++ 
b/simulation-code/plotFunctions.py @@ -122,9 +122,9 @@ def logScale_w(x, h_0, z_max): # readWeightMatrixData # Reads complete weight matrix data from a file # filename: name of the file to read the data from -# Nl: number of neurons in one row/column +# Nl_exc: number of excitatory neurons in one row/column # return: the adjacency matrix, the early-phase weight matrix, the late-phase weight matrix, the firing rate vector -def readWeightMatrixData(filename, Nl): +def readWeightMatrixData(filename, Nl_exc): # read weight matrices and firing rates from file with open(plot_folder + filename) as f: @@ -137,23 +137,23 @@ def readWeightMatrixData(filename, Nl): rows = len(rawmatrix_v) - if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != Nl): + if (rows != len(rawmatrix_v[0].split('\t\t'))) or (rows != Nl_exc): print('Data file error in "' + filename + '"') f.close() exit() - v = np.zeros((Nl,Nl)) - h = np.zeros((Nl**2,Nl**2)) - z = np.zeros((Nl**2,Nl**2)) + v = np.zeros((Nl_exc,Nl_exc)) + h = np.zeros((Nl_exc**2,Nl_exc**2)) + z = np.zeros((Nl_exc**2,Nl_exc**2)) - for i in range(Nl**2): - if i < Nl: + for i in range(Nl_exc**2): + if i < Nl_exc: value0 = rawmatrix_v[i].split('\t\t') value1 = rawmatrix_h[i].split('\t\t') value2 = rawmatrix_z[i].split('\t\t') - for j in range(Nl**2): - if i < Nl and j < Nl: + for j in range(Nl_exc**2): + if i < Nl_exc and j < Nl_exc: v[i][j] = float(value0[j]) h[i][j] = float(value1[j]) z[i][j] = float(value2[j]) @@ -169,9 +169,9 @@ def readWeightMatrixData(filename, Nl): # h_0: initial early-phase weight # z_min: minimum late-phase weight # z_max: maximum late-phase weight -# Nl: number of neurons in one row/column +# Nl_exc: number of excitatory neurons in one row/column # title [optional]: main title of the figure -def plotWeights(filename, h_0, z_min, z_max, Nl, title = "Weight matrices"): +def plotWeights(filename, h_0, z_min, z_max, Nl_exc, title = "Weight matrices"): # normalization factors h_max = 2*h_0 @@ -183,7 +183,7 @@ def 
plotWeights(filename, h_0, z_min, z_max, Nl, title = "Weight matrices"):
 	cmasymmetric_w = shiftedColorMap(cm.seismic, start=0, midpoint=h_0/w_max, stop=1, name='asymmetric_w')
 
 	# read weight matrix data
-	connections, h, z, v = readWeightMatrixData(filename, Nl)
+	connections, h, z, v = readWeightMatrixData(filename, Nl_exc)
 	connections = np.flip(connections, 0)
 	h = np.flip(h, 0)
 	z = np.flip(z, 0)
@@ -192,7 +192,7 @@ def plotWeights(filename, h_0, z_min, z_max, Nl, title = "Weight matrices"):
 	v_max = np.amax(v)
 	if v_max == 0.0:
 		v_max = 1.0
-	v_data = v.reshape(int(Nl),int(Nl)) / v_max
+	v_data = v.reshape(int(Nl_exc),int(Nl_exc)) / v_max
 
 	# plotting
 	plt.figure(figsize=(20,12))
@@ -244,9 +244,9 @@ def plotWeights(filename, h_0, z_min, z_max, Nl, title = "Weight matrices"):
 # h_0: initial early-phase weight
 # z_min: minimum late-phase weight
 # z_max: maximum late-phase weight
-# Nl: number of neurons in one row/column
+# Nl_exc: number of excitatory neurons in one row/column
 # title [optional]: main title of the figure
-def plotWeightDiffs(filename1, filename2, h_0, z_min, z_max, Nl, title = "Weight matrices"):
+def plotWeightDiffs(filename1, filename2, h_0, z_min, z_max, Nl_exc, title = "Weight matrices"):
 	# Colormaps for Calcium simulation
 
 	# colormaps for h and z
@@ -258,8 +258,8 @@ def plotWeightDiffs(filename1, filename2, h_0, z_min, z_max, Nl, title = "Weight
 	w_max = h_max + h_0 * z_max
 
 	# read weight matrix data
-	connections1, h1, z1, v1 = readWeightMatrixData(filename1, Nl)
-	connections2, h2, z2, v2 = readWeightMatrixData(filename2, Nl)
+	connections1, h1, z1, v1 = readWeightMatrixData(filename1, Nl_exc)
+	connections2, h2, z2, v2 = readWeightMatrixData(filename2, Nl_exc)
 
 	if (connections1 != connections2).any():
 		print("Not the same connectivity, plot cannot be created!")
@@ -274,7 +274,7 @@ def plotWeightDiffs(filename1, filename2, h_0, z_min, z_max, Nl, title = "Weight
 	v_max = np.amax(v)
 	if v_max == 0.0:
 		v_max = 1.0
-	v_data = v.reshape(int(Nl),int(Nl)) / v_max
+	v_data = v.reshape(int(Nl_exc),int(Nl_exc)) / v_max
 
 	# plotting
 	plt.figure(figsize=(20,12))
@@ -324,10 +324,10 @@ def plotWeightDiffs(filename1, filename2, h_0, z_min, z_max, Nl, title = "Weight
 # h_0: initial early-phase weight
 # z_min: minimum late-phase weight
 # z_max: maximum late-phase weight
-# Nl: number of neurons in one row/column
+# Nl_exc: number of excitatory neurons in one row/column
 # already_averaged: specifies if a data file shall be created
 # title [optional]: main title of the figure
-def plotAveragedWeights(filename, h_0, z_min, z_max, Nl, already_averaged, title = "Averaged incoming and outgoing weights"):
+def plotAveragedWeights(filename, h_0, z_min, z_max, Nl_exc, already_averaged, title = "Averaged incoming and outgoing weights"):
 
 	# colormaps for h and z
 	cmh0center = shiftedColorMap(cm.seismic, start=0, midpoint=0.5, stop=1.0, name='h0center')
@@ -347,26 +347,26 @@ def plotAveragedWeights(filename, h_0, z_min, z_max, Nl, already_averaged, title
 	rawdata = rawdata.split('\n')
 	nn = len(rawdata)-1
 
-	if nn != 2*Nl*Nl:
+	if nn != 2*Nl_exc*Nl_exc:
 		print('Data file error in "' + filename + '"')
 		print(nn)
 		f.close()
 		exit()
 
-	v = np.zeros((Nl,Nl))
-	h_inc = np.zeros((Nl,Nl))
-	h_out = np.zeros((Nl,Nl))
-	z_inc = np.zeros((Nl,Nl))
-	z_out = np.zeros((Nl,Nl))
-	w_inc = np.zeros((Nl,Nl))
-	w_out = np.zeros((Nl,Nl))
+	v = np.zeros((Nl_exc,Nl_exc))
+	h_inc = np.zeros((Nl_exc,Nl_exc))
+	h_out = np.zeros((Nl_exc,Nl_exc))
+	z_inc = np.zeros((Nl_exc,Nl_exc))
+	z_out = np.zeros((Nl_exc,Nl_exc))
+	w_inc = np.zeros((Nl_exc,Nl_exc))
+	w_out = np.zeros((Nl_exc,Nl_exc))
 
 	for n in range(nn):
-		n2 = n % (Nl*Nl)
-		i = (n2 - (n2 % Nl)) // Nl # row number
-		j = n2 % Nl # column number
+		n2 = n % (Nl_exc*Nl_exc)
+		i = (n2 - (n2 % Nl_exc)) // Nl_exc # row number
+		j = n2 % Nl_exc # column number
 
-		if n < Nl*Nl:
+		if n < Nl_exc*Nl_exc:
 			values = rawdata[n].split()
 			h_inc[i][j] = logScale_h(float(values[0]), h_0) / h_max
 			h_out[i][j] = logScale_h(float(values[1]), h_0) / h_max
@@ -459,26 +459,26 @@ def plotAveragedWeights(filename, h_0, z_min, z_max, Nl, already_averaged, title
 	else:
 		# read weight matrix data
-		connections, h, z, v = readWeightMatrixData(filename, Nl)
+		connections, h, z, v = readWeightMatrixData(filename, Nl_exc)
 
 		# change filename
 		filename_av = filename.replace('_net_', '_net_av_')
 
 		# find firing rate maximum and reshape array
 		v_max = np.amax(v)
-		v_data = v.reshape(int(Nl),int(Nl))
+		v_data = v.reshape(int(Nl_exc),int(Nl_exc))
 
 		# average incoming (axis=0) synaptic weights per neuron
 		con_count = np.sum(connections, axis=0)
 		con_count[con_count == 0] = 1
-		h_incoming = np.array(np.sum(h, axis=0) / con_count).reshape(int(Nl),int(Nl))
-		z_incoming = np.array(np.sum(z, axis=0) / con_count).reshape(int(Nl),int(Nl))
+		h_incoming = np.array(np.sum(h, axis=0) / con_count).reshape(int(Nl_exc),int(Nl_exc))
+		z_incoming = np.array(np.sum(z, axis=0) / con_count).reshape(int(Nl_exc),int(Nl_exc))
 
 		# average outgoing (axis=1) synaptic weights per neuron
 		con_count = np.sum(connections, axis=1)
 		con_count[con_count == 0] = 1
-		h_outgoing = np.array(np.sum(h, axis=1) / con_count).reshape(int(Nl),int(Nl))
-		z_outgoing = np.array(np.sum(z, axis=1) / con_count).reshape(int(Nl),int(Nl))
+		h_outgoing = np.array(np.sum(h, axis=1) / con_count).reshape(int(Nl_exc),int(Nl_exc))
+		z_outgoing = np.array(np.sum(z, axis=1) / con_count).reshape(int(Nl_exc),int(Nl_exc))
 
 		# plotting
 		plt.figure(figsize=(6,12))
@@ -560,25 +560,6 @@ def plotAveragedWeights(filename, h_0, z_min, z_max, Nl, already_averaged, title
 		f.write("z_outgoing =\r\n" + str(z_outgoing) + "\r\n\r\n\r\n")
 		f.close()
 
-
-# getRhombCore
-# Returns the neurons belonging to a rhomb-shaped core in a quadratic grid
-# core_center: the central neuron of the rhomb
-# core_radius: the "radius" of the rhomb
-# Nl: number of neurons in one row/column
-# return: the neurons belonging to the core in a numpy array
-def getRhombCore(core_center, core_radius, Nl):
-
-	core = np.array([], dtype=np.int32)
-	core_size = 2*core_radius**2 + 2*core_radius + 1
-
-	for i in range(-core_radius, core_radius+1, 1):
-		num_cols = (core_radius-abs(i))
-
-		for j in range(-num_cols, num_cols+1, 1):
-			core = np.append(core, np.array([core_center+i*Nl+j]))
-	return core
-
 # earlyPhaseWeightsFromCore
 # Returns all the early-phase synaptic weights incoming to neuron i from core neurons
 # i: neuron index
@@ -669,10 +650,10 @@ def latePhaseWeightsToCore(i, adj, z, core):
 # h_0: initial early-phase weight
 # z_min: minimum late-phase weight
 # z_max: maximum late-phase weight
-# Nl: number of neurons in one row/column
-# r_CA: radius of the cell assembly core
+# Nl_exc: number of excitatory neurons in one row/column
+# s_CA: size of the cell assembly core
 # title [optional]: main title of the figure
-def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Averaged incoming weights from subpopulations"):
+def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl_exc, s_CA, title = "Averaged incoming weights from subpopulations"):
 
 	# colormaps for h and z
 	cmh0center = shiftedColorMap(cm.seismic, start=0, midpoint=0.5, stop=1.0, name='h0center')
@@ -683,7 +664,7 @@ def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Av
 	w_max = h_max + h_0 * z_max
 
 	# read weight matrix data
-	connections, h, z, v = readWeightMatrixData(filename, Nl)
+	connections, h, z, v = readWeightMatrixData(filename, Nl_exc)
 	print("=========================\nData from: " + filename)
 
 	# change filename
@@ -693,19 +674,19 @@ def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Av
 	v_max = np.amax(v)
 	if v_max == 0.0:
 		v_max = 1.0
-	v_data = v.reshape(int(Nl),int(Nl)) / v_max
+	v_data = v.reshape(int(Nl_exc),int(Nl_exc)) / v_max
 
 	# average synaptic weight from core neurons and from control neurons for all neurons
-	core = getRhombCore(820, r_CA, Nl)
+	core = np.arange(s_CA)
 
-	h_core_inc = np.zeros((Nl*Nl))
-	z_core_inc = np.zeros((Nl*Nl))
-	#h_core_out = np.zeros((Nl*Nl))
-	#z_core_out = np.zeros((Nl*Nl))
-	h_control_inc = np.zeros((Nl*Nl))
-	z_control_inc = np.zeros((Nl*Nl))
+	h_core_inc = np.zeros((Nl_exc*Nl_exc))
+	z_core_inc = np.zeros((Nl_exc*Nl_exc))
+	#h_core_out = np.zeros((Nl_exc*Nl_exc))
+	#z_core_out = np.zeros((Nl_exc*Nl_exc))
+	h_control_inc = np.zeros((Nl_exc*Nl_exc))
+	z_control_inc = np.zeros((Nl_exc*Nl_exc))
 
-	for i in range(Nl*Nl):
+	for i in range(Nl_exc*Nl_exc):
 		weights = earlyPhaseWeightsFromCore(i, connections, h, core)
 		if weights != []:
 			h_core_inc[i] = np.sum(weights) / len(weights)
@@ -730,10 +711,10 @@ def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Av
 	# compute mean synaptic weight from core to core, from core to control, from control to core and from control to control
-	core_mask = np.in1d(np.arange(Nl*Nl), core) # boolean mask of exc. population, entries are True for core neurons
+	core_mask = np.in1d(np.arange(Nl_exc*Nl_exc), core) # boolean mask of exc. population, entries are True for core neurons
 	control_mask = np.logical_not(core_mask) # boolean mask of exc. population, entries are True for control neurons
 	core_size = len(core)
-	control_size = Nl*Nl - core_size
+	control_size = Nl_exc*Nl_exc - core_size
 
 	mean_h_core_core = np.sum(h_core_inc[core_mask]) / core_size
 	mean_h_core_control = np.sum(h_core_inc[control_mask]) / control_size
@@ -756,12 +737,12 @@ def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Av
 	print("mean_z_control_control = " + str(mean_z_control_control))
 
 	# reshape neuron arrays for plotting
-	h_core_inc = h_core_inc.reshape(int(Nl),int(Nl))
-	z_core_inc = z_core_inc.reshape(int(Nl),int(Nl))
-	#h_core_out = h_core_out.reshape(int(Nl),int(Nl))
-	#z_core_out = z_core_out.reshape(int(Nl),int(Nl))
-	h_control_inc = h_control_inc.reshape(int(Nl),int(Nl))
-	z_control_inc = z_control_inc.reshape(int(Nl),int(Nl))
+	h_core_inc = h_core_inc.reshape(int(Nl_exc),int(Nl_exc))
+	z_core_inc = z_core_inc.reshape(int(Nl_exc),int(Nl_exc))
+	#h_core_out = h_core_out.reshape(int(Nl_exc),int(Nl_exc))
+	#z_core_out = z_core_out.reshape(int(Nl_exc),int(Nl_exc))
+	h_control_inc = h_control_inc.reshape(int(Nl_exc),int(Nl_exc))
+	z_control_inc = z_control_inc.reshape(int(Nl_exc),int(Nl_exc))
 
 	# plotting
 	plt.figure(figsize=(6,12))
@@ -858,10 +839,10 @@ def plotAveragedSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Av
 # h_0: initial early-phase weight
 # z_min: minimum late-phase weight
 # z_max: maximum late-phase weight
-# Nl: number of neurons in one row/column
-# r_CA: radius of the cell assembly core
+# Nl_exc: number of excitatory neurons in one row/column
+# s_CA: size of the cell assembly core
 # title [optional]: main title of the figure
-def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Total incoming weights from subpopulations"):
+def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl_exc, s_CA, title = "Total incoming weights from subpopulations"):
 
 	# colormaps for h and z
 	cmh0center = shiftedColorMap(cm.seismic, start=0, midpoint=0.5, stop=1.0, name='h0center')
@@ -872,7 +853,7 @@ def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Total
 	w_max = h_max + h_0 * z_max
 
 	# read weight matrix data
-	connections, h, z, v = readWeightMatrixData(filename, Nl)
+	connections, h, z, v = readWeightMatrixData(filename, Nl_exc)
 	print("=========================\nData from: " + filename)
 
 	# change filename
@@ -882,19 +863,19 @@ def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Total
 	v_max = np.amax(v)
 	if v_max == 0.0:
 		v_max = 1.0
-	v_data = v.reshape(int(Nl),int(Nl)) / v_max
+	v_data = v.reshape(int(Nl_exc),int(Nl_exc)) / v_max
 
 	# average synaptic weight from core neurons and from control neurons for all neurons
-	core = getRhombCore(820, r_CA, Nl)
+	core = np.arange(s_CA)
 
-	h_core_inc = np.zeros((Nl*Nl))
-	z_core_inc = np.zeros((Nl*Nl))
-	#h_core_out = np.zeros((Nl*Nl))
-	#z_core_out = np.zeros((Nl*Nl))
-	h_control_inc = np.zeros((Nl*Nl))
-	z_control_inc = np.zeros((Nl*Nl))
+	h_core_inc = np.zeros((Nl_exc*Nl_exc))
+	z_core_inc = np.zeros((Nl_exc*Nl_exc))
+	#h_core_out = np.zeros((Nl_exc*Nl_exc))
+	#z_core_out = np.zeros((Nl_exc*Nl_exc))
+	h_control_inc = np.zeros((Nl_exc*Nl_exc))
+	z_control_inc = np.zeros((Nl_exc*Nl_exc))
 
-	for i in range(Nl*Nl):
+	for i in range(Nl_exc*Nl_exc):
 		weights = earlyPhaseWeightsFromCore(i, connections, h, core)
 		h_core_inc[i] = np.sum(weights)
@@ -914,12 +895,12 @@ def plotTotalSubPopWeights(filename, h_0, z_min, z_max, Nl, r_CA, title = "Total
 		z_control_inc[i] = np.sum(weights)
 
 	# reshape neuron arrays for plotting
-	h_core_inc = h_core_inc.reshape(int(Nl),int(Nl))
-	z_core_inc = z_core_inc.reshape(int(Nl),int(Nl))
-	#h_core_out = h_core_out.reshape(int(Nl),int(Nl))
-	#z_core_out = z_core_out.reshape(int(Nl),int(Nl))
-	h_control_inc = h_control_inc.reshape(int(Nl),int(Nl))
-	z_control_inc = z_control_inc.reshape(int(Nl),int(Nl))
+	h_core_inc = h_core_inc.reshape(int(Nl_exc),int(Nl_exc))
+	z_core_inc = z_core_inc.reshape(int(Nl_exc),int(Nl_exc))
+	#h_core_out = h_core_out.reshape(int(Nl_exc),int(Nl_exc))
+	#z_core_out = z_core_out.reshape(int(Nl_exc),int(Nl_exc))
+	h_control_inc = h_control_inc.reshape(int(Nl_exc),int(Nl_exc))
+	z_control_inc = z_control_inc.reshape(int(Nl_exc),int(Nl_exc))
 
 	# plotting
 	plt.figure(figsize=(6,12))