diff --git a/MANIFEST.in b/MANIFEST.in
index 315b0496..6833e473 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -3,3 +3,4 @@ include LICENSE
include MANIFEST.in
include etrago/tools/*.json
include *.txt
+include etrago/*.json
diff --git a/doc/about.rst b/doc/about.rst
index 9999962c..3bad066d 100644
--- a/doc/about.rst
+++ b/doc/about.rst
@@ -2,15 +2,17 @@
About eTraGo
============
-Optimization of flexibility options for transmission grids based on PyPSA.
+eTraGo stands for **e**\lectric **Tra**\nsmission **G**\rid **o**\ptimization.
-A speciality in this context is that transmission grids are described by the
-380, 220 and 110 kV in Germany. Conventionally the 110kV grid is part of the
-distribution grid. The integration of the transmission and 'upper' distribution
-grid is part of eTraGo.
+The python package eTraGo provides optimization strategies of flexibility options
+for transmission grids based on PyPSA. A peculiarity in this context is that
+the German transmission grid is described by the 380, 220 and 110 kV voltage levels.
+Conventionally the 110 kV grid is part of the distribution grid. The integration of
+the transmission and ‘upper’ distribution grid is part of eTraGo.
The focus of optimization are flexibility options with a special focus on
-energy storages and grid expansion measures.
+energy storage and grid expansion measures.
+
The open_eGo project
@@ -21,11 +23,76 @@ This software project is part of the research project
The OpenEnergy Platform
=======================
-With in this project we developted the OpenEnergy Platform which this software
+Within this project we developed the OpenEnergy Platform which this software
is using in order to get and store the in- and output data. Before you start to
-calculate a registration on the platform is needed. For more see
+calculate, a registration on the platform is needed. For more information see
`openenergy-platform `_ and login.
+The OpenEnergy Platform mainly addresses students, researchers and scientists in
+the field of energy modelling and analytics, as well as other interested people in
+those fields. The platform provides great tools to make your energy system
+modelling process transparent. All data of the open_eGo project are stored on
+this platform.
+`Learn more about the database access `_.
+
+
+
+
+
+Model overview
+==============
+
+
+
+.. figure:: images/eTraGo_model.png
+ :align: center
+ :scale: 75%
+
+
+eDisGo
+======
+The python package eDisGo provides a toolbox for analysis and optimization
+of distribution grids. It is closely related to the python project Ding0, as this
+project is currently the sole data source for eDisGo, providing synthetic
+grid data for the whole of Germany. `Learn more here `_.
+
+
+eGo
+===
+
+The python package eGo is a toolbox and application which connects the tools eTraGo
+(optimization of flexibility options at transmission grid level)
+and eDisGo (optimization of distribution grids). All those python
+packages are part of the research project
+`open_eGo `_.
+`Learn more here `_.
+
+
+Dataprocessing
+==============
+
+For the open_eGo project several python packages are developed which are fed
+by the input data of the data processing. The data processing is written in
+SQL and Python. `Learn more here `_.
+
+ego.io
+======
+
+The ego.io serves as a SQLAlchemy interface to the OpenEnergy database (oedb). The
+oedb table ORM objects are defined here, along with small helpers for io tasks.
+`Learn more here `_.
+
+
+Ding0
+=====
+
+The DIstribution Network GeneratOr (Ding0) is a tool to generate synthetic
+medium and low voltage power distribution grids based on open
+(or at least accessible) data.
+`Learn more here `_.
+
+
+
LICENSE
=======
diff --git a/doc/developer_notes.rst b/doc/developer_notes.rst
index d66820d4..ee4aea94 100644
--- a/doc/developer_notes.rst
+++ b/doc/developer_notes.rst
@@ -7,9 +7,19 @@ Developer notes
Installation for Developers
===========================
-The best way is to use a virtual environment. see:
-Step 2) Clone the source code from github
+.. note::
+ Installation is primarily tested on (Ubuntu-like) Linux OS.
+
+1. If you like, create a virtual environment (wherever you like) and activate it (if you do not use a virtual environment, start with step 2):
+
+.. code-block:: bash
+
+ $ virtualenv --clear -p python3.5 etrago
+ $ cd etrago/
+ $ source bin/activate
+
+2. Clone the source code from github
.. code-block::
@@ -25,28 +35,3 @@ With your activated environment `cd` to the cloned directory and run:
This will install all needed packages into your environment.
Now you should be ready to go.
-
-Windows or Mac OSX users
-************************
-
-
-
-- download and install github (https://desktop.github.com)
-
-- open GitHubDesktop and clone eTraGo from open_eGo
-
-- open an anaconda prompt as administrator and run:
-
- ```
- pip install -e path/to/Github/Folder/eTraGo
- ```
-
- ```
- pip install pandas == 0.20.3 (version 0.21 is not working!)
- ```
-
-- to check if everything is installed run:
-
- ```
- pip freeze
- ```
diff --git a/doc/howToUse.rst b/doc/howToUse.rst
index 9ba23e30..ea5d454a 100644
--- a/doc/howToUse.rst
+++ b/doc/howToUse.rst
@@ -3,30 +3,29 @@
How to use eTraGo?
==================
-After you installed eTraGo you can run eTraGo via terminal with
-``$ cd..//eTrago/etrago/`` and ``python3 appl.py``.
-The program will execute following functions:
+After you installed eTraGo you would typically start optimization runs by
+executing the ‘appl.py’ which is situated in
+``./eTrago/etrago/`` (e.g. by ``python3 appl.py``).
-.. code-block:: python
-
- # execute etrago function
- network = etrago(args)
- # plots
- # make a line loading plot
- plot_line_loading(network)
- # plot stacked sum of nominal power for each generator type and timestep
- plot_stacked_gen(network, resolution="MW")
- # plot to show extendable storages
- storage_distribution(network)
+The ‘appl.py’ is used as a simple user interface. Here
+parameters, calculation methods and scenario settings are set in a python
+dictionary called 'args'. It is crucial to understand these parameters.
+For example some of them contradict the usage of others.
+You can find the documentation of all parameters defined in 'args' here:
+:meth:`etrago.appl.etrago`.
+The appl.py contains the etrago(args) function which uses the
+defined 'args' dictionary to start the desired calculation.
-Overview of setting arguments
-=============================
+Afterwards a PyPSA network will contain all results. You can use
+several plotting functions from the :meth:`etrago.tools.plot` in order
+to visualize the results. For example
+the :meth:`etrago.tools.plot.plot_line_loading` plots
+the relative line loading in % of all AC lines and DC links of the network.
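+
+A minimal sketch of such a session (the argument values are illustrative):
+
+.. code-block:: python
+
+ from etrago.appl import etrago, args
+
+ args['start_snapshot'] = 1
+ args['end_snapshot'] = 24
+ network, disaggregated_network = etrago(args)
+
+ from etrago.tools.plot import plot_line_loading
+ plot_line_loading(network)
+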
-The tool eTraGo is using a main python script ‘appl.py’ in which your
-parameters, calculation methods and scenario settings are set in a python
-dictionary called args. The documentation of the program settings can you
-find here: :meth:`etrago.appl.etrago`.
+To save the results you can use an interface to the oedb or simply write them
+to csv files. These functionalities can also be specified
+in :meth:`etrago.appl.etrago`.
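+
+For example (a sketch; both settings live in the 'args' dictionary):
+
+.. code-block:: python
+
+ args['results'] = './results' # write result csv files to this folder
+ args['export'] = False # set to True to export back to the oedb
+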
.. _Examples:
@@ -38,4 +37,4 @@ Examples and tutorial notebooks
.. toctree::
:maxdepth: 7
- OpenMod
+ OpenMod
diff --git a/doc/images/eTraGo_model.png b/doc/images/eTraGo_model.png
new file mode 100644
index 00000000..1be7f3a9
Binary files /dev/null and b/doc/images/eTraGo_model.png differ
diff --git a/doc/index.rst b/doc/index.rst
index c895ade7..241ab540 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -15,7 +15,7 @@ Welcome to eTraGo's documentation!
heavy development.
.. toctree::
- :maxdepth: 7
+ :maxdepth: 2
about
installation
diff --git a/doc/installation.rst b/doc/installation.rst
index de876539..be0390a2 100644
--- a/doc/installation.rst
+++ b/doc/installation.rst
@@ -1,9 +1,11 @@
============
Installation
============
-If you have a working Python3 environment, use pypi to install the latest
-eTraGo version. We highly recommend you to use a virtual environment.
-Use following pip command in order to install eTraGo:
+eTraGo is designed as a Python package, therefore it is mandatory to have
+`Python 3 `_ installed. If you have a
+working Python3 environment, use pypi to install the latest eTraGo version.
+We highly recommend you to use a virtual environment. Use the following pip
+command in order to install eTraGo:
.. code-block:: bash
@@ -11,7 +13,6 @@ Use following pip command in order to install eTraGo:
-
Using virtual environment
=========================
@@ -29,7 +30,8 @@ Linux and Ubuntu
================
The Package eTraGo is tested with Ubuntu 16.04 and 18.04 inside the virtual
-environments of *virtualenv*. The installation is shown above.
+environments of `virtualenv `_.
+The installation is shown above.
@@ -37,55 +39,73 @@ Windows or Mac OSX users
========================
For Windows and/or Mac OSX user we highly recommend to install and use Anaconda
-for you Python3 installation. First install anaconda inclusing python 3.x
-version from https://www.anaconda.com/download/ and open an anaconda prompt as
-administrator and run:
+for your Python3 installation. First install Anaconda including python 3.5 or
+higher from https://www.anaconda.com/download/ and open an anaconda
+prompt as administrator and run:
.. code-block:: bash
$ conda install pip
$ conda config --add channels conda-forge
$ conda install shapely
+ $ pip3 install eTraGo --process-dependency-links
The full Documentation can be found
-`on this page.`_. We use Anaconda
+`on this page `_. We use Anaconda
with an own environment in order to reduze problems with Packages and different
-versions on our system. Learn more about (`Anacona environments
- `_).
-
-
-
+versions on our system. Learn more about
+`Anaconda `_
+environments.
-Setup ego.io
-=============
+Setup database connection
+=========================
+The package ``ego.io`` gives you a python SQLAlchemy representation of
+the *OpenEnergy-Database* (oedb) and access to it by using the
+`oedialect `_, a SQLAlchemy binding
+Python package for the REST-API used by the OpenEnergy Platform (OEP). Your API
+access / login data will be saved in the folder ``.egoio`` in the file
+``config.ini``. You can create a new account on
+`openenergy-platform.org/login `_.
- [oedb]
-
- username = YourOEDBUserName
-
- database = oedb
-
- host = oe2.iws.cs.ovgu.de
+oedialect connection
+--------------------
- port = 5432
+.. code-block:: ini
- pw = YourOEDBPassword
+ [oedb]
+ dialect = oedialect
+ username =
+ database = oedb
+ host = openenergy-platform.org
+ port = 80
+ password =
- [local]
- username = YourLocalUserName
+Local database connection
+-------------------------
- database = YourLocalDatabaseName
+.. code-block:: ini
- host = 127.0.0.1
+ [local]
+ username = YourOEDBUserName
+ database = YourLocalDatabaseName
+ host = localhost or 127.0.0.1
+ port = 5433
+ pw = YourLocalPassword
- port = 5432
- pw = YourLocalPassword
+Old developer connection
+-------------------------
+.. code-block:: ini
-when you just calculate local or on the oedb you just need this section
+ [oedb]
+ username = YourOEDBUserName
+ database = oedb
+ host = oe2.iws.cs.ovgu.de
+ port = 5432
+ pw = YourOEDBPassword
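+
+Once the ``config.ini`` is in place, a database session can be created via the
+egoio helper (a minimal sketch; ``db.connection`` is assumed to read the access
+data from ``config.ini``):
+
+.. code-block:: python
+
+ from egoio.tools import db
+ from sqlalchemy.orm import sessionmaker
+
+ conn = db.connection(section='oedb')
+ session = sessionmaker(bind=conn)()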
diff --git a/doc/theoretical_background.rst b/doc/theoretical_background.rst
index 5216c2aa..d715411c 100644
--- a/doc/theoretical_background.rst
+++ b/doc/theoretical_background.rst
@@ -8,8 +8,8 @@ Definitions and Units
=====================
eTraGo executes the Open Source software PyPSA to perform power flow
-simulations and uses their definitions and
-`units`.
+simulations and uses its definitions and
+`units `_.
@@ -18,10 +18,10 @@ Assumptions on Data
eTraGo fetches its necessary input data from the OpenEnergy Platform including
load, generation, grid and scenario-related data. More details can be found in
-the `Data-Processing `.
+the `Data-Processing `_.
As overview, the Open Source grid structure is developed by processing data
-from `OpenStreetMap ` (OSM) to obtain
+from `OpenStreetMap (OSM) `_ to obtain
geo-referenced locations of substations and links equal or above the 110 kV
voltage level. OSM also provides information about residential, retail,
industrial and agricultural areas which is used with standardized profiles to
@@ -33,15 +33,13 @@ follows assumptions for the year 2035 by [NEP2015]_ and eGo100 assumes to
operate the future energy system completely by renewables [ehighway2050]_.
-
-Methodology
+Methods
===========
-
PyPSA
-----
The power flow simulations are performed by the Open Source tool
-`PyPSA ` with a linear approximation for the
+`PyPSA `_ with a linear approximation for the
optimization of power flows in general. Expecting that eTraGo fulfills the
assumptions to perfom a LOPF (small voltage angle differences, branch
resistances negligible to their reactances, voltage magnitudes can be kept at
@@ -59,18 +57,18 @@ This method maps an input network to an output network with the nodes of the
extra-high voltage level. All nodes with a voltage level below the extra-high
voltage level are mapped to their nearest neighboring node in the extra-high
voltage level with the
-`dijkstra algorithm `
+`Dijkstra algorithm `_
(110 kV ---> 220,380 kV).
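+
+A sketch of the corresponding calls in eTraGo's appl.py (``session`` is the
+oedb database session):
+
+.. code-block:: python
+
+ network.generators.control = "PV"
+ busmap = busmap_from_psql(network, session, scn_name=args['scn_name'])
+ network = cluster_on_extra_high_voltage(network, busmap, with_time=True)
+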
K-Means Clustering
^^^^^^^^^^^^^^^^^^
-This `method` maps an input
+This `method `_ maps an input
network to a new output network with an adjustable number of nodes and new
coordinates. The algorithm sets these coordinates randomly and minimizes a
certain parameter like for example the distances between old coordinates and
their nearest neighbor in the set of new coordinates. The method was
-implemented by `Hoersch et al. ` within
+implemented by `Hoersch et al. `_ within
PyPSA.
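+
+A sketch of the corresponding call in eTraGo's appl.py (which passes several
+further tuning parameters; here they are assumed to keep their defaults):
+
+.. code-block:: python
+
+ clustering = kmean_clustering(network, n_clusters=10)
+ network = clustering.network.copy()
+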
Snapshot skipping
@@ -84,7 +82,7 @@ snapshots in the time series.
Snapshot-Clustering
^^^^^^^^^^^^^^^^^^^
This method aggregate given time series for various time intervals like i.e.
-days using the `tsam ` package. Contrary to
+days using the `tsam `_ package. Contrary to
snapshot skipping, this approach averages a certain period of snapshots
instead of choosing a representative snapshot.
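+
+A sketch of how this is invoked in eTraGo's appl.py:
+
+.. code-block:: python
+
+ network = snapshot_clustering(network, how='daily',
+ clusters=args['snapshot_clustering'])
+ extra_functionality = daily_bounds
+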
@@ -93,7 +91,7 @@ Storage expansion
-----------------
To evaluate the amount of storage units in future energy systems, the possible
installation of new storage units at every node in the network is allowed. The
-size and operation of these storages are part of the optimization problem.
+size and operation of these storages are part of the optimization problem.
Two types of storage technologies are considered - batteries and hydrogen in
underground caverns. Li-Ion battery storages as representatives for short-term
@@ -106,44 +104,59 @@ in the underground. The storage parameters for both types are reached by
Grid expansion
--------------
-tbd line_grouping? branch_capacity factor?
+The grid expansion is realized by extending the capacities of existing
+lines and substations. These capacities are regarded as part of the
+optimization problem, whereby the possible extension is unlimited. With respect
+to the different voltage levels and lengths MVA-specific costs are considered
+in the linear optimization of the power flow. Besides, several planned grid
+expansion scenarios from the German grid development plan can be considered as
+possible additional power lines.
-Features
---------
+Miscellaneous Features
+----------------------
-tbd (solver options, generator_noise, load_cluster, line_grouping, branch_capacity factor, load shedding)
-
+Several features were developed to enhance the functionality of eTraGo.
+The 'solver_options' and 'generator_noise' arguments adapt the calculation
+to the computing environment. The latter adds a reproducible, small random
+noise to the marginal costs of each generator in order to prevent a plateau
+of optimal solutions. The specific solver options depend on the applied
+solver, for example Gurobi, CPLEX or GLPK. Considering reproducibility, the
+'load_cluster' argument enables loading a previously calculated clustered
+network. Besides, 'line_grouping' provides a grouping of lines which connect
+the same buses and the 'branch_capacity_factor' adds a factor to adapt all
+line capacities. The 'load_shedding' argument is used for debugging complex
+grids in order to avoid infeasibilities. It introduces a very expensive
+generator at each bus to meet the demand. When optimizing storage units and
+grid expansion without limiting constraints, there should be no need for
+load shedding. The 'minimize_loading' argument forces the optimization to
+minimize the loading of the lines in addition to the costs.
+'parallelisation' provides the opportunity to divide the optimization
+problem into a given number of sub-problems; for each group of snapshots the
+problem is solved separately. This functionality can only be used for
+problems which do not have dependencies from one snapshot to another.
+Therefore this option cannot be used with the optimization of storage units
+due to their state of charge.
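+
+A sketch of how some of these arguments look in the 'args' dictionary:
+
+.. code-block:: python
+
+ args['solver_options'] = {'threads': 4, 'BarConvTol': 1.e-5} # e.g. Gurobi
+ args['generator_noise'] = 789456 # seed for reproducible random noise
+ args['branch_capacity_factor'] = 0.7 # adapt all line capacities
+ args['load_shedding'] = True # for debugging infeasible problems only
+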
References
==========
.. [NEP2015] Übertragungsnetzbetreiber Deutschland. (2015).:
- *Netzentwicklungsplan Strom 2025*, Version 2015, 1. Entwurf, 2015. (https://
- www.netzentwicklungsplan.de/sites/default/files/paragraphs-files/NEP_2025_
- 1_Entwurf_Teil1_0_0.pdf)
+ *Netzentwicklungsplan Strom 2025*, Version 2015, 1. Entwurf, 2015.
+ (`<https://www.netzentwicklungsplan.de/sites/default/files/paragraphs-files/NEP_2025_1_Entwurf_Teil1_0_0.pdf>`_)
.. [coastdat-2] coastDat-2 (2017).:
- Hindcast model http://www.coastdat.de/data/index.php.en
-
-.. [FlEnS] Bunke, Wolf-Dieter, Martin Söthe, Marion Wingenbach, and Cord Kaldemeyer. 2018.:
- *“(Fl)ensburg (En)ergy (S)cenarios - open_eGo Scenarios for 2014/2035/2050.”* Open Science Framework. June 13. doi:10.17605/OSF.IO/BPF36.
-
-.. [opsd-conv] `Open Power System Data `_. 2016.:
- Data provided by Open Power System Data - Data Package Conventional power plants, version 2016-10-27. Primary data from BNetzA Kraftwerksliste,
- Umweltbundesamt Datenbank Kraftwerke in Deutschland.
-
-.. [opsd-res] `Open Power System Data `_. 2017.:
- Data provided by Open Power System Data - Data Package Renewable power plants, early version 2016-02-10. Primary data from BNetzA, BNetzA_PV, TransnetBW, TenneT, Amprion, 50Hertz, Netztransparenz.de, Postleitzahlen Deutschland, Energinet.dk, Energistyrelsen, GeoNames, French Ministery of the Environment, Energy and the Sea, OpenDataSoft, Urzad Regulacji Energetyki (URE)
+ Hindcast model `<http://www.coastdat.de/data/index.php.en>`_
.. [ehighway2050] e-Highway2050. (2015).:
- e-HIGHWAY 2050 Modular Development Plan of the Pan-European Transmission System 2050 - database per country. Retrieved from http://www.e-highway2050.eu/fileadmin/documents/Results/e-Highway_database_per_country-08022016.xlsx
-
-.. [christ2017] Christ, M. (2017).:
- Integration sozial-ökologischer Faktoren in die Energiesystemmodellierung am Beispiel von Entwicklungspfaden für den Windenergieausbau in Deutschland (PhD Thesis). Europa-Universität Flensburg.
-
-.. [BMWi] tbd
+ e-HIGHWAY 2050 Modular Development Plan of the Pan-European Transmission
+ System 2050 - database per country. Retrieved from
+ (`<http://www.e-highway2050.eu/fileadmin/documents/Results/e-Highway_database_per_country-08022016.xlsx>`_)
-.. [Acatech2015] tbd
+.. [Acatech2015] 'Flexibilitätskonzepte für die Stromversorgung 2050
+ ``_'
-.. [BGR] tbd
+.. [BGR] `Salzstruktur in Norddeutschland <>`_. 2015.:
+ Data provided by the Federal Institute for Geosciences and Natural
+ Resources (Bundesanstalt für Geowissenschaften und Rohstoffe, BGR)
diff --git a/doc/whatsnew.rst b/doc/whatsnew.rst
index d0b3a781..8a5cd78e 100644
--- a/doc/whatsnew.rst
+++ b/doc/whatsnew.rst
@@ -8,6 +8,7 @@ These are new features and improvements of note in each release.
:local:
:backlinks: top
+.. include:: whatsnew/v0_7_0.rst
.. include:: whatsnew/v0_6_1.rst
.. include:: whatsnew/v0_6.rst
.. include:: whatsnew/v0_5_1.rst
diff --git a/doc/whatsnew/v0_6.rst b/doc/whatsnew/v0_6.rst
index 0f6aea4e..c3e89128 100644
--- a/doc/whatsnew/v0_6.rst
+++ b/doc/whatsnew/v0_6.rst
@@ -4,7 +4,7 @@ eTraGo now enables combined grid and storage expansion, snapshot clustering and
Added features
--------------
-* A part from optimizing the investment of storages it is now also possible to optimize grid expansion investments. In this context we added an argument 'extendable' which expects an array of the different component types you want to optimize. This argument corresponds to functions in the new extendable.py file. It is possible to choose from expansion strategies which are defined within that sub-package. Capital costs for new grid components can be defined and are annualized by means of interest rate, componet lifetime and operation period.
+* Apart from optimizing the investment of storages it is now also possible to optimize grid expansion investments. In this context we added an argument 'extendable' which expects an array of the different component types you want to optimize. This argument corresponds to functions in the new extendable.py file. It is possible to choose from expansion strategies which are defined within that sub-package. Capital costs for new grid components can be defined and are annualized by means of interest rate, component lifetime and operation period.
* The k-means network clustering ('network_clustering_kmeans') has been refactored. It is now possible to reproduce busmaps by csv-importing already defined busmaps. Consequently it is possible to write busmaps. Here the argument 'load_cluster' was introduced. Moreover it is possible read and write bus_weightings. This helps to e.g. run a future scenario but using a bus weighting of the status quo. Moreover, the remove_stubs function from PyPSA is now easily usable in eTraGo.
* The snapshot_clustering can now be used in order to reduce the temporal complexity. Typical days are taken in order to represent the entire year. Here the package tsam (developed by Leander Kotzur) is used. Standardly a hierarchical clustering method is used which is e.g. described by Nahmacher et al. ( see: https://www.sciencedirect.com/science/article/pii/S0360544216308556 ).
* Scenario variations from the oedb can be introduced. The argument 'scn_extension' will activate an extension scenario which adds components such as lines or generator to the base scenario. The 'scn_decommissioning' argument states whether you want to remove existing components. Right now, in the oedb two scenarios are accessible which represent the grid expansion (and the corresponding removal of existing equipment) planned by the German network development plan.
diff --git a/doc/whatsnew/v0_7_0.rst b/doc/whatsnew/v0_7_0.rst
new file mode 100644
index 00000000..e690bd5e
--- /dev/null
+++ b/doc/whatsnew/v0_7_0.rst
@@ -0,0 +1,16 @@
+Release 0.7.0 (September 6, 2018)
++++++++++++++++++++++++++++++++++
+eTraGo is able to produce feasible non-linear power flows based on optimization results and allows the disaggregation of clustered results to original spatial complexities.
+
+Added features
+--------------
+
+* The pf_post_lopf function was improved. Due to changes in the data set, the non-linear power flow (pf) now creates feasible solutions. If network optimization is turned on, a second lopf which regards the updated reactances and optimizes only dispatch is performed before the pf is executed.
+* The disaggregation method was included. When using a network clustering method to reduce the spatial complexity of the given network, a disaggregation method can be used afterwards to distribute the nodal results (generation and storage timeseries) to the original complexity. The method 'disaggregation': 'uniform' can be used as an interface functionality for distribution grid planning tools like eDisGo.
+* For the network expansion it is now additionally possible to only optimize the German power lines or only the crossborder lines. Moreover one can choose to optimize only a predefined set of power lines which are identified by a worst-case analysis beforehand.
+* Intertemporal constraints can be applied to certain power plants. For different technologies certain parameters such as 'start_up_cost', 'start_up_fuel', 'min_up_time' and 'min_down_time' are defined in the ramp_limits function.
+* Crossborder lines can now easily be modelled as 'DC' links. Moreover the capacities of these lines can be adjusted with respect to an ACER report on thermal as well as net transfer capacities.
+* Thanks to @jankaeh, the grid topology within the cities of Stuttgart, Munich and Hannover was manually improved. Perspectively this function should become obsolete when openstreetmap and/or osmTGmod achieve better data coverage.
+* As an alternative to the normal editing of the calculation settings (args) within the appl.py it is now possible to load an args.json file.
+
+
diff --git a/etrago/appl.py b/etrago/appl.py
index 3560a15f..5e169692 100644
--- a/etrago/appl.py
+++ b/etrago/appl.py
@@ -24,11 +24,13 @@
the function etrago.
"""
-import numpy as np
-from numpy import genfromtxt
-import time
+
import datetime
import os
+import os.path
+import time
+import numpy as np
+import pandas as pd
__copyright__ = (
"Flensburg University of Applied Sciences, "
@@ -41,11 +43,21 @@
if 'READTHEDOCS' not in os.environ:
# Sphinx does not run this code.
# Do not import internal packages directly
+ from etrago.cluster.disaggregation import (
+ MiniSolverDisaggregation,
+ UniformDisaggregation)
+
+ from etrago.cluster.networkclustering import (
+ busmap_from_psql,
+ cluster_on_extra_high_voltage,
+ kmean_clustering)
+
from etrago.tools.io import (
NetworkScenario,
results_to_oedb,
extension,
decommissioning)
+
from etrago.tools.plot import (
plot_line_loading,
plot_stacked_gen,
@@ -54,8 +66,8 @@
gen_dist,
storage_distribution,
storage_expansion,
- extension_overlay_network,
- nodal_gen_dispatch)
+ nodal_gen_dispatch,
+ network_extension)
from etrago.tools.utilities import (
load_shedding,
@@ -66,50 +78,67 @@
pf_post_lopf,
loading_minimization,
calc_line_losses,
- group_parallel_lines)
- from etrago.tools.extendable import extendable
- from etrago.cluster.networkclustering import (
- busmap_from_psql, cluster_on_extra_high_voltage, kmean_clustering)
+ group_parallel_lines,
+ add_missing_components,
+ distribute_q,
+ set_q_foreign_loads,
+ clip_foreign,
+ foreign_links,
+ crossborder_capacity,
+ ramp_limits,
+ geolocation_buses,
+ get_args_setting)
+
+ from etrago.tools.extendable import extendable, extension_preselection
from etrago.cluster.snapshot import snapshot_clustering, daily_bounds
from egoio.tools import db
from sqlalchemy.orm import sessionmaker
+ import oedialect
+
args = { # Setup and Configuration:
'db': 'oedb', # database session
- 'gridversion': 'v0.4.2', # None for model_draft or Version number
+ 'gridversion': 'v0.4.5', # None for model_draft or Version number
'method': 'lopf', # lopf or pf
- 'pf_post_lopf': False, # perform a pf after a lopf simulation
+ 'pf_post_lopf': True, # perform a pf after a lopf simulation
'start_snapshot': 1,
'end_snapshot': 2,
'solver': 'gurobi', # glpk, cplex or gurobi
- 'solver_options': {}, # {} for default or dict of solver options
- 'scn_name': 'NEP 2035', # a scenario: Status Quo, NEP 2035, eGo100
+ 'solver_options': {'threads': 4, 'method': 2, 'BarConvTol': 1.e-5,
+ 'FeasibilityTol': 1.e-6,
+ 'logFile': 'gurobi_eTraGo.log'}, # {} for default or dict of solver options
+ 'scn_name': 'NEP 2035', # a scenario: Status Quo, NEP 2035, eGo 100
# Scenario variations:
- 'scn_extension': None, # None or extension scenario
- 'scn_decommissioning': None, # None or decommissioning scenario
- 'add_Belgium_Norway': False, # add Belgium and Norway
+ 'scn_extension': None, # None or array of extension scenarios
+ 'scn_decommissioning': None, # None or decommissioning scenario
# Export options:
'lpfile': False, # save pyomo's lp file: False or /path/tofolder
- 'results': False, # save results as csv: False or /path/tofolder
+ 'results': './results', # save results as csv: False or /path/tofolder
'export': False, # export the results back to the oedb
# Settings:
- 'extendable': None, # None or array of components to optimize
+ 'extendable': ['network', 'storages'], # Array of components to optimize
'generator_noise': 789456, # apply generator noise, False or seed number
'minimize_loading': False,
+ 'ramp_limits': False, # Choose if using ramp limit of generators
# Clustering:
- 'network_clustering_kmeans': False, # False or the value k for clustering
+ 'network_clustering_kmeans': 10, # False or the value k for clustering
'load_cluster': False, # False or predefined busmap for k-means
'network_clustering_ehv': False, # clustering of HV buses to EHV buses.
+ 'disaggregation': 'uniform', # or None, 'mini' or 'uniform'
'snapshot_clustering': False, # False or the number of 'periods'
# Simplifications:
'parallelisation': False, # run snapshots parallely.
'skip_snapshots': False,
'line_grouping': False, # group lines parallel lines
'branch_capacity_factor': 0.7, # factor to change branch capacities
- 'load_shedding': False, # meet the demand at very high cost
+ 'load_shedding': False, # meet the demand at very high cost
+ 'foreign_lines': {'carrier': 'AC', 'capacity': 'osmTGmod'}, # dict containing carrier and capacity settings of foreign lines
'comments': None}
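+
+# Optionally overwrite 'args' with settings from a json file by passing e.g.
+# jsonpath='args.json' (see etrago/args.json for a template).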
+args = get_args_setting(args, jsonpath=None)
+
+
def etrago(args):
"""The etrago function works with following arguments:
@@ -158,17 +187,19 @@ def etrago(args):
Schleswig-Holstein by adding the acronym SH to the scenario
name (e.g. 'SH Status Quo').
- scn_extension : str
+ scn_extension : NoneType or list
None,
- Choose an extension-scenario which will be added to the existing
+ Choose extension-scenarios which will be added to the existing
network container. Data of the extension scenarios are located in
extension-tables (e.g. model_draft.ego_grid_pf_hv_extension_bus)
with the prefix 'extension_'.
- Currently there are two overlay networks:
+ Currently there are three overlay networks:
'nep2035_confirmed' includes all planed new lines confirmed by the
Bundesnetzagentur
'nep2035_b2' includes all new lines planned by the
Netzentwicklungsplan 2025 in scenario 2035 B2
+ 'BE_NO_NEP 2035' includes planned lines to Belgium and Norway and adds
+ BE and NO as electrical neighbours
scn_decommissioning : str
None,
@@ -183,12 +214,7 @@ def etrago(args):
confirmed projects
'nep2035_b2' includes all lines that will be replaced in
NEP-scenario 2035 B2
-
- add_Belgium_Norway : bool
- False,
- State if you want to add Belgium and Norway as electrical neighbours.
- Currently, generation and load always refer to scenario 'NEP 2035'.
-
+
lpfile : obj
False,
State if and where you want to save pyomo's lp file. Options:
@@ -204,18 +230,23 @@ def etrago(args):
State if you want to export the results of your calculation
back to the database.
- extendable : NoneType or list
+ extendable : list
['network', 'storages'],
- Choose None or which components you want to optimize.
+ Choose components you want to optimize.
Settings can be added in /tools/extendable.py.
The most important possibilities:
'network': set all lines, links and transformers extendable
+ 'german_network': set lines and transformers in German grid
+ extendable
+ 'foreign_network': set foreign lines and transformers extendable
'transformers': set all transformers extendable
'overlay_network': set all components of the 'scn_extension'
extendable
'storages': allow to install extendable storages
(unlimited in size) at each grid node in order to meet
the flexibility demand.
+ 'network_preselection': set only preselected lines extendable,
+ method is chosen in function call
generator_noise : bool or int
@@ -226,6 +257,11 @@ def etrago(args):
minimize_loading : bool
False,
...
+ ramp_limits : bool
+ False,
+ State if you want to consider ramp limits of generators.
+ Increases time for solving significantly.
+ Only works when calculating at least 30 snapshots.
network_clustering_kmeans : bool or int
False,
@@ -275,6 +311,12 @@ def etrago(args):
is helpful when debugging: a very expensive generator is set to each
bus and meets the demand when regular
generators cannot do so.
+
+ foreign_lines : dict
+ {'carrier': 'AC', 'capacity': 'osmTGmod'}
+ Choose transmission technology and capacity of foreign lines:
+ 'carrier': 'AC' or 'DC'
+ 'capacity': 'osmTGmod', 'ntc_acer' or 'thermal_acer'
comments : str
None
@@ -309,6 +351,21 @@ def etrago(args):
# add coordinates
network = add_coordinates(network)
+
+ # Set countrytags of buses, lines, links and transformers
+ network = geolocation_buses(network, session)
+
+ # Set q_sets of foreign loads
+ network = set_q_foreign_loads(network, cos_phi=1)
+
+ # Change transmission technology and/or capacity of foreign lines
+ if args['foreign_lines']['carrier'] == 'DC':
+ foreign_links(network)
+ network = geolocation_buses(network, session)
+
+ if args['foreign_lines']['capacity'] != 'osmTGmod':
+ crossborder_capacity(network, args['foreign_lines']['capacity'],
+ args['branch_capacity_factor'])
# TEMPORARY vague adjustment due to transformer bug in data processing
if args['gridversion'] == 'v0.2.11':
@@ -319,7 +376,13 @@ def etrago(args):
# set extra_functionality to default
extra_functionality = None
+
+ # set disaggregated_network to default
+ disaggregated_network = None
+ # set clustering to default
+ clustering = None
+
if args['generator_noise'] is not False:
# add random noise to all generators
s = np.random.RandomState(args['generator_noise'])
@@ -335,67 +398,54 @@ def etrago(args):
if args['line_grouping']:
group_parallel_lines(network)
- # network clustering
- if args['network_clustering_ehv']:
- network.generators.control = "PV"
- busmap = busmap_from_psql(network, session, scn_name=args['scn_name'])
- network = cluster_on_extra_high_voltage(
- network, busmap, with_time=True)
-
- # k-mean clustering
- if not args['network_clustering_kmeans'] is False:
- network = kmean_clustering(
- network,
- n_clusters=args['network_clustering_kmeans'],
- load_cluster=args['load_cluster'],
- line_length_factor=1,
- remove_stubs=False,
- use_reduced_coordinates=False,
- bus_weight_tocsv=None,
- bus_weight_fromcsv=None)
-
# Branch loading minimization
if args['minimize_loading']:
extra_functionality = loading_minimization
-
- if args['skip_snapshots']:
- network.snapshots = network.snapshots[::args['skip_snapshots']]
- network.snapshot_weightings = network.snapshot_weightings[
- ::args['skip_snapshots']] * args['skip_snapshots']
-
+
+ # scenario extensions
if args['scn_extension'] is not None:
- network = extension(
- network,
- session,
- scn_extension=args['scn_extension'],
- start_snapshot=args['start_snapshot'],
- end_snapshot=args['end_snapshot'],
- k_mean_clustering=args['network_clustering_kmeans'])
-
+ for i in range(len(args['scn_extension'])):
+ network = extension(
+ network,
+ session,
+ version=args['gridversion'],
+ scn_extension=args['scn_extension'][i],
+ start_snapshot=args['start_snapshot'],
+ end_snapshot=args['end_snapshot'])
+ network = geolocation_buses(network, session)
+
+ # scenario decommissioning
if args['scn_decommissioning'] is not None:
network = decommissioning(
network,
session,
- scn_decommissioning=args['scn_decommissioning'],
- k_mean_clustering=args['network_clustering_kmeans'])
+ version=args['gridversion'],
+ scn_decommissioning=args['scn_decommissioning'])
- if args['add_Belgium_Norway']:
- network = extension(
- network,
- session,
- scn_extension='BE_NO_NEP 2035',
- start_snapshot=args['start_snapshot'],
- end_snapshot=args['end_snapshot'],
- k_mean_clustering=args['network_clustering_kmeans'])
+ # Add missing lines in Munich and Stuttgart
+ network = add_missing_components(network)
- if args['extendable'] is not None:
+ # investive optimization strategies
+ if args['extendable'] != []:
network = extendable(
- network,
- args['extendable'],
- args['scn_extension'])
+ network,
+ args)
network = convert_capital_costs(
network, args['start_snapshot'], args['end_snapshot'])
-
+
+ # skip snapshots
+ if args['skip_snapshots']:
+ network.snapshots = network.snapshots[::args['skip_snapshots']]
+ network.snapshot_weightings = network.snapshot_weightings[
+ ::args['skip_snapshots']] * args['skip_snapshots']
+
+ # snapshot clustering
+ if args['snapshot_clustering'] is not False:
+ network = snapshot_clustering(
+ network, how='daily', clusters=args['snapshot_clustering'])
+ extra_functionality = daily_bounds # daily_bounds or other constraint
+
+ # set branch capacity factor for lines and transformers
if args['branch_capacity_factor']:
network.lines.s_nom = network.lines.s_nom * \
args['branch_capacity_factor']
@@ -406,11 +456,37 @@ def etrago(args):
if args['load_shedding']:
load_shedding(network)
- # snapshot clustering
- if not args['snapshot_clustering'] is False:
- network = snapshot_clustering(
- network, how='daily', clusters=args['snapshot_clustering'])
- extra_functionality = daily_bounds # daily_bounds or other constraint
+ # ehv network clustering
+ if args['network_clustering_ehv']:
+ network.generators.control = "PV"
+ busmap = busmap_from_psql(network, session, scn_name=args['scn_name'])
+ network = cluster_on_extra_high_voltage(
+ network, busmap, with_time=True)
+
+ # k-mean clustering
+ if args['network_clustering_kmeans'] is not False:
+ clustering = kmean_clustering(network,
+ n_clusters=args['network_clustering_kmeans'],
+ load_cluster=args['load_cluster'],
+ line_length_factor=1,
+ remove_stubs=False,
+ use_reduced_coordinates=False,
+ bus_weight_tocsv=None,
+ bus_weight_fromcsv=None,
+ n_init=100,
+ max_iter=1000,
+ tol=1e-8,
+ n_jobs=-2)
+ disaggregated_network = (
+ network.copy() if args.get('disaggregation') else None)
+ network = clustering.network.copy()
+
+ if args['ramp_limits']:
+ ramp_limits(network)
+
+ # preselection of extendable lines
+ if 'network_preselection' in args['extendable']:
+ extension_preselection(network, args, 'snapshot_clustering', 2)
# parallisation
if args['parallelisation']:
@@ -422,6 +498,7 @@ def etrago(args):
solver_name=args['solver'],
solver_options=args['solver_options'],
extra_functionality=extra_functionality)
+
# start linear optimal powerflow calculations
elif args['method'] == 'lopf':
x = time.time()
@@ -441,9 +518,17 @@ def etrago(args):
# calc_line_losses(network)
if args['pf_post_lopf']:
- pf_post_lopf(network, scenario)
+ x = time.time()
+ pf_solution = pf_post_lopf(network,
+ args,
+ extra_functionality,
+ add_foreign_lopf=True)
+ y = time.time()
+ z = (y - x) / 60
+ print("Time for PF [min]:", round(z, 2))
calc_line_losses(network)
-
+ network = distribute_q(network, allocation='p_nom')
+
# provide storage installation costs
if sum(network.storage_units.p_nom_opt) != 0:
installed_storages = \
@@ -457,41 +542,89 @@ def etrago(args):
storage_costs,
2))
+ if clustering:
+ disagg = args.get('disaggregation')
+ skip = () if args['pf_post_lopf'] else ('q',)
+ t = time.time()
+ if disagg:
+ if disagg == 'mini':
+ disaggregation = MiniSolverDisaggregation(
+ disaggregated_network,
+ network,
+ clustering,
+ skip=skip)
+ elif disagg == 'uniform':
+ disaggregation = UniformDisaggregation(disaggregated_network,
+ network,
+ clustering,
+ skip=skip)
+
+ else:
+ raise Exception('Invalid disaggregation command: ' + disagg)
+
+ disaggregation.execute(scenario, solver=args['solver'])
+ # temporary bug fix for solar generators which are NaN during
+ # night time instead of 0
+ disaggregated_network.generators_t.p.fillna(0, inplace=True)
+ disaggregated_network.generators_t.q.fillna(0, inplace=True)
+
+ disaggregated_network.results = network.results
+ print("Time for overall desaggregation [min]: {:.2}"
+ .format((time.time() - t) / 60))
+
# write lpfile to path
if not args['lpfile'] is False:
network.model.write(
args['lpfile'], io_options={
'symbolic_solver_labels': True})
+
# write PyPSA results back to database
if args['export']:
username = str(conn.url).split('//')[1].split(':')[0]
args['user_name'] = username
safe_results = False # default is False.
- # If it is set to 'True' the result set will be safed
+ # If it is set to 'True' the result set will be saved
# to the versioned grid schema eventually apart from
# being saved to the model_draft.
# ONLY set to True if you know what you are doing.
results_to_oedb(
session,
network,
- args,
+ dict([("disaggregated_results", False)] + list(args.items())),
grid='hv',
safe_results=safe_results)
+ if disaggregated_network:
+ results_to_oedb(
+ session,
+ disaggregated_network,
+ dict([("disaggregated_results", True)] + list(args.items())),
+ grid='hv',
+ safe_results=safe_results)
# write PyPSA results to csv to path
if not args['results'] is False:
- results_to_csv(network, args['results'])
+ if not args['pf_post_lopf']:
+ results_to_csv(network, args)
+ else:
+ results_to_csv(network, args, pf_solution=pf_solution)
+
+ if disaggregated_network:
+ results_to_csv(
+ disaggregated_network,
+ {k: os.path.join(v, 'disaggregated')
+ if k == 'results' else v
+ for k, v in args.items()})
# close session
# session.close()
- return network
+ return network, disaggregated_network
if __name__ == '__main__':
# execute etrago function
print(datetime.datetime.now())
- network = etrago(args)
+ network, disaggregated_network = etrago(args)
print(datetime.datetime.now())
# plots
# make a line loading plot
diff --git a/etrago/args.json b/etrago/args.json
new file mode 100644
index 00000000..14c4182e
--- /dev/null
+++ b/etrago/args.json
@@ -0,0 +1,32 @@
+{
+ "db": "oedb",
+ "gridversion": "v0.4.5",
+ "method": "lopf",
+ "pf_post_lopf": true,
+ "start_snapshot": 1,
+ "end_snapshot" : 2,
+ "solver": "gurobi",
+ "solver_options":{},
+ "scn_name": "eGo 100",
+ "scn_extension": null,
+ "scn_decommissioning": null,
+ "lpfile": false,
+ "results": false,
+ "export": false,
+ "extendable": "['network', 'storages']",
+ "generator_noise": 789456,
+ "minimize_loading": false,
+ "ramp_limits": false,
+ "network_clustering_kmeans": 30,
+ "load_cluster": false,
+ "network_clustering_ehv": false,
+ "disaggregation": null,
+ "snapshot_clustering": false,
+ "parallelisation": false,
+ "skip_snapshots": false,
+ "line_grouping": false,
+ "branch_capacity_factor": 0.7,
+ "load_shedding": false,
+ "foreign_lines" :{"carrier": "AC", "capacity": "osmTGmod"},
+ "comments": ""
+ }
diff --git a/etrago/cluster/disaggregation.py b/etrago/cluster/disaggregation.py
new file mode 100644
index 00000000..4bdd96d3
--- /dev/null
+++ b/etrago/cluster/disaggregation.py
@@ -0,0 +1,533 @@
+from functools import reduce
+from itertools import count, product
+from operator import methodcaller as mc, mul as multiply
+import cProfile
+import random
+import string
+import time
+
+import pandas as pd
+from pyomo.environ import Constraint
+from pypsa import Network
+
+
+class Disaggregation:
+ def __init__(self, original_network, clustered_network, clustering,
+ skip=()):
+ """
+ :param original_network: Initial (unclustered) network structure
+ :param clustered_network: Clustered network used for the optimization
+ :param clustering: The clustering object as returned by
+ `pypsa.networkclustering.get_clustering_from_busmap`
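+ :param skip: Collection of time series names (e.g. ``('q',)``) that
+ are skipped when distributing results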
+ """
+ self.original_network = original_network
+ self.clustered_network = clustered_network
+ self.clustering = clustering
+
+ self.buses = pd.merge(original_network.buses,
+ clustering.busmap.to_frame(name='cluster'),
+ left_index=True, right_index=True)
+
+ self.skip = skip
+
+ self.idx_prefix = '_'
+
+ def add_constraints(self, cluster, extra_functionality=None):
+ """
+ Dummy function that allows the extension of `extra_functionalities` by
+ custom conditions.
+
+ :param cluster: Index of the cluster to disaggregate
+ :param extra_functionality: extra_functionalities to extend
+ :return: unaltered `extra_functionalities`
+ """
+ return extra_functionality
+
+ def reindex_with_prefix(self, dataframe, prefix=None):
+ if prefix is None:
+ prefix = self.idx_prefix
+ dataframe.set_index(
+ dataframe.index.map(lambda x: prefix + x),
+ inplace=True)
+
+
+
+ def construct_partial_network(self, cluster, scenario):
+ """
+ Compute the partial network that has been merged into a single cluster.
+ The resulting network retains the external cluster buses that
+ share some line with the cluster identified by `cluster`.
+ These external buses will be prefixed by self.idx_prefix in order to
+ prevent name clashes with buses in the disaggregation
+
+ :param cluster: Index of the cluster to disaggregate
+ :return: Tuple of (partial_network, external_buses) where
+ `partial_network` is the result of the partial decomposition
+ and `external_buses` represent clusters adjacent to `cluster` that may
+ be influenced by calculations done on the partial network.
+ """
+
+ # Create an empty network
+ partial_network = Network()
+
+ # find all lines that have at least one bus inside the cluster
+ busflags = (self.buses['cluster'] == cluster)
+
+ def is_bus_in_cluster(conn):
+ return busflags[conn]
+
+ # Copy configurations to new network
+ partial_network.snapshots = self.original_network.snapshots
+ partial_network.snapshot_weightings = (self.original_network
+ .snapshot_weightings)
+ partial_network.carriers = self.original_network.carriers
+
+ # Collect all connectors that have some node inside the cluster
+
+ external_buses = pd.DataFrame()
+
+ line_types = ['lines', 'links', 'transformers']
+ for line_type in line_types:
+ # Copy all lines that reside entirely inside the cluster ...
+ setattr(partial_network, line_type,
+ filter_internal_connector(
+ getattr(self.original_network, line_type),
+ is_bus_in_cluster))
+
+ # ... and their time series
+ # TODO: These are all time series, not just the ones from lines
+ # residing entirely inside the cluster.
+ # Is this a problem?
+ setattr(partial_network, line_type + '_t',
+ getattr(self.original_network, line_type + '_t'))
+
+ # Copy all lines whose `bus0` lies within the cluster
+ left_external_connectors = filter_left_external_connector(
+ getattr(self.original_network, line_type),
+ is_bus_in_cluster)
+
+ if not left_external_connectors.empty:
+ f = lambda x: self.idx_prefix + self.clustering.busmap.loc[x]
+ ca_option = pd.get_option('mode.chained_assignment')
+ pd.set_option('mode.chained_assignment', None)
+ left_external_connectors.loc[:, 'bus0'] = (
+ left_external_connectors.loc[:, 'bus0'].apply(f))
+ pd.set_option('mode.chained_assignment', ca_option)
+ external_buses = pd.concat((external_buses,
+ left_external_connectors.bus0))
+
+ # Copy all lines whose `bus1` lies within the cluster
+ right_external_connectors = filter_right_external_connector(
+ getattr(self.original_network, line_type),
+ is_bus_in_cluster)
+ if not right_external_connectors.empty:
+ f = lambda x: self.idx_prefix + self.clustering.busmap.loc[x]
+ ca_option = pd.get_option('mode.chained_assignment')
+ pd.set_option('mode.chained_assignment', None)
+ right_external_connectors.loc[:, 'bus1'] = (
+ right_external_connectors.loc[:, 'bus1'].apply(f))
+ pd.set_option('mode.chained_assignment', ca_option)
+ external_buses = pd.concat((external_buses,
+ right_external_connectors.bus1))
+
+ # Collect all buses that are contained in or somehow connected to the
+ # cluster
+
+ buses_in_lines = self.buses[busflags].index
+
+ bus_types = ['loads', 'generators', 'stores', 'storage_units',
+ 'shunt_impedances']
+
+ # Copy all values that are part of the cluster
+ partial_network.buses = self.original_network.buses[
+ self.original_network.buses.index.isin(buses_in_lines)]
+
+ # Collect all buses that are external, but connected to the cluster ...
+ externals_to_insert = self.clustered_network.buses[
+ self.clustered_network.buses.index.isin(
+ map(lambda x: x[0][len(self.idx_prefix):],
+ external_buses.values))]
+
+ # ... prefix them to avoid name clashes with buses from the original
+ # network ...
+ self.reindex_with_prefix(externals_to_insert)
+
+ # .. and insert them as well as their time series
+ partial_network.buses = (partial_network.buses
+ .append(externals_to_insert))
+ partial_network.buses_t = self.original_network.buses_t
+
+ # TODO: Rename `bustype` to on_bus_type
+ for bustype in bus_types:
+ # Copy loads, generators, ... from original network to network copy
+ setattr(partial_network, bustype,
+ filter_buses(getattr(self.original_network, bustype),
+ buses_in_lines))
+
+ # Collect on-bus components from external, connected clusters
+ buses_to_insert = filter_buses(
+ getattr(self.clustered_network, bustype),
+ map(lambda x: x[0][len(self.idx_prefix):],
+ external_buses.values))
+
+ # Prefix their external bindings
+ buses_to_insert.loc[:, 'bus'] = (
+ self.idx_prefix +
+ buses_to_insert.loc[:, 'bus'])
+
+ setattr(partial_network, bustype,
+ getattr(partial_network, bustype).append(buses_to_insert))
+
+ # Also copy their time series
+ setattr(partial_network,
+ bustype + '_t',
+ getattr(self.original_network, bustype + '_t'))
+ # Note: The code above copies more than necessary, because it
+ # copies every time series for `bustype` from the original
+ # network and not only the subset belonging to the partial
+ # network. The commented code below tries to filter the time
+ # series accordingly, but there must be a bug somewhere because
+ # using it, the time series in the clusters and sums of the
+ # time series after disaggregation don't match up.
+ """
+ series = getattr(self.original_network, bustype + '_t')
+ partial_series = type(series)()
+ for s in series:
+ partial_series[s] = series[s].loc[
+ :,
+ getattr(partial_network, bustype)
+ .index.intersection(series[s].columns)]
+ setattr(partial_network, bustype + '_t', partial_series)
+ """
+
+ # Just a simple sanity check
+ # TODO: Remove when sure that disaggregation will not go insane anymore
+ for line_type in line_types:
+ assert (getattr(partial_network, line_type).bus0.isin(
+ partial_network.buses.index).all())
+ assert (getattr(partial_network, line_type).bus1.isin(
+ partial_network.buses.index).all())
+
+ return partial_network, external_buses
+
+ def execute(self, scenario, solver=None):
+ self.solve(scenario, solver)
+
+ def solve(self, scenario, solver):
+ """
+ Decompose each cluster into separate units and try to optimize them
+ separately
+ :param scenario:
+ :param solver: Solver that may be used to optimize partial networks
+ """
+ clusters = set(self.clustering.busmap.values)
+ n = len(clusters)
+ self.stats = {'clusters': pd.DataFrame(
+ index=sorted(clusters),
+ columns=["decompose", "spread", "transfer"])}
+ profile = cProfile.Profile()
+ for i, cluster in enumerate(sorted(clusters)):
+ print('---')
+ print('Decompose cluster %s (%d/%d)' % (cluster, i+1, n))
+ profile.enable()
+ t = time.time()
+ partial_network, externals = self.construct_partial_network(
+ cluster,
+ scenario)
+ profile.disable()
+ self.stats['clusters'].loc[cluster, 'decompose'] = time.time() - t
+ print('Decomposed in ',
+ self.stats['clusters'].loc[cluster, 'decompose'])
+ t = time.time()
+ profile.enable()
+ self.solve_partial_network(cluster, partial_network, scenario,
+ solver)
+ profile.disable()
+ self.stats['clusters'].loc[cluster, 'spread'] = time.time() - t
+ print('Result distributed in ',
+ self.stats['clusters'].loc[cluster, 'spread'])
+ profile.enable()
+ t = time.time()
+ self.transfer_results(partial_network, externals)
+ profile.disable()
+ self.stats['clusters'].loc[cluster, 'transfer'] = time.time() - t
+ print('Results transferred in ',
+ self.stats['clusters'].loc[cluster, 'transfer'])
+
+ profile.enable()
+ t = time.time()
+ print('---')
+ fs = (mc("sum"), mc("sum"))
+ for bt, ts in (
+ ('generators', {'p': fs, 'q': fs}),
+ ('storage_units', {'p': fs, 'state_of_charge': fs, 'q': fs})):
+ print("Attribute sums, {}, clustered - disaggregated:" .format(bt))
+ cnb = getattr(self.clustered_network, bt)
+ onb = getattr(self.original_network, bt)
+ print("{:>{}}: {}".format('p_nom_opt', 4 + len('state_of_charge'),
+ reduce(lambda x, f: f(x), fs[:-1], cnb['p_nom_opt'])
+ -
+ reduce(lambda x, f: f(x), fs[:-1], onb['p_nom_opt'])))
+
+ print("Series sums, {}, clustered - disaggregated:" .format(bt))
+ cnb = getattr(self.clustered_network, bt + '_t')
+ onb = getattr(self.original_network, bt + '_t')
+ for s in ts:
+ print("{:>{}}: {}".format(s, 4 + len('state_of_charge'),
+ reduce(lambda x, f: f(x), ts[s], cnb[s])
+ -
+ reduce(lambda x, f: f(x), ts[s], onb[s])))
+ profile.disable()
+ self.stats['check'] = time.time() - t
+ print('Checks computed in ', self.stats['check'])
+
+ # profile.print_stats(sort='cumtime')
+
+ def transfer_results(self, partial_network, externals,
+ bustypes=['loads', 'generators', 'stores',
+ 'storage_units', 'shunt_impedances'],
+ series=None):
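+ # Write the partial network's optimized time series back into the
+ # original (unclustered) network, component type by component type.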
+ for bustype in bustypes:
+ orig_buses = getattr(self.original_network, bustype + '_t')
+ part_buses = getattr(partial_network, bustype + '_t')
+ for key in (orig_buses.keys()
+ if series is None
+ else (k for k in orig_buses.keys()
+ if k in series.get(bustype, {}))):
+ for snap in partial_network.snapshots:
+ orig_buses[key].loc[snap].update(part_buses[key].loc[snap])
+
+
+ def solve_partial_network(self, cluster, partial_network, scenario,
+ solver=None):
+ extras = self.add_constraints(cluster)
+ partial_network.lopf(scenario.timeindex,
+ solver_name=solver,
+ extra_functionality=extras)
+
+class MiniSolverDisaggregation(Disaggregation):
+ def add_constraints(self, cluster, extra_functionality=None):
+ if extra_functionality is None:
+ extra_functionality = lambda network, snapshots: None
+ extra_functionality = self._validate_disaggregation_generators(
+ cluster,
+ extra_functionality)
+ return extra_functionality
+
+ def _validate_disaggregation_generators(self, cluster, f):
+ def extra_functionality(network, snapshots):
+ f(network, snapshots)
+ generators = self.original_network.generators.assign(
+ bus=lambda df: df.bus.map(self.clustering.busmap))
+ grouper = [generators.carrier]
+ def construct_constraint(model, snapshot, carrier):
+ # TODO: Optimize
+
+ generator_p = [model.generator_p[(x, snapshot)]
+ for x in generators.loc[
+ (generators.bus == cluster) &
+ (generators.carrier == carrier)].index]
+ if not generator_p:
+ return Constraint.Feasible
+ sum_generator_p = sum(generator_p)
+
+ cluster_generators = self.clustered_network.generators[
+ (self.clustered_network.generators.bus == cluster) &
+ (self.clustered_network.generators.carrier == carrier)]
+ sum_clustered_p = sum(
+ self.clustered_network.generators_t['p'].loc[snapshot, c]
+ for c in cluster_generators.index)
+ return sum_generator_p == sum_clustered_p
+
+ # TODO: Generate a better name
+ network.model.validate_generators = Constraint(
+ list(snapshots),
+ set(generators.carrier),
+ rule=construct_constraint)
+ return extra_functionality
+
+ # TODO: This function is never used.
+ # Is this a problem?
+ def _validate_disaggregation_buses(self, cluster, f):
+ def extra_functionality(network, snapshots):
+ f(network, snapshots)
+
+ for bustype, bustype_pypsa, suffixes in [
+ ('storage', 'storage_units', ['_dispatch', '_spill', '_store']),
+ ('store', 'stores', [''])]:
+ generators = getattr(self.original_network, bustype_pypsa).assign(
+ bus=lambda df: df.bus.map(self.clustering.busmap))
+ for suffix in suffixes:
+ def construct_constraint(model, snapshot):
+ # TODO: Optimize
+ buses_p = [getattr(model,bustype + '_p' + suffix)[(x, snapshot)] for x in
+ generators.loc[(generators.bus == cluster)].index]
+ if not buses_p:
+ return Constraint.Feasible
+ sum_bus_p = sum(buses_p)
+ cluster_buses = getattr(self.clustered_network, bustype_pypsa)[
+ (getattr(self.clustered_network, bustype_pypsa).bus == cluster)]
+ sum_clustered_p = sum(
+ getattr(self.clustered_network, bustype_pypsa + '_t')['p'].loc[snapshot,c] for
+ c in cluster_buses.index)
+ return sum_bus_p == sum_clustered_p
+
+ # TODO: Generate a better name
+ network.model.add_component('validate_' + bustype + suffix, Constraint(list(snapshots), rule=construct_constraint))
+ return extra_functionality
+
+class UniformDisaggregation(Disaggregation):
+ def solve_partial_network(self, cluster, partial_network, scenario,
+ solver=None):
+ bustypes = {
+ 'generators': {
+ 'group_by': ('carrier',),
+ 'series': ('p', 'q')},
+ 'storage_units': {
+ 'group_by': ('carrier', 'max_hours'),
+ 'series': ('p', 'state_of_charge', 'q')}}
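+ # Per-bus weights used to split each clustered series among the
+ # original buses; entries name either static columns (e.g.
+ # 'p_nom_opt') or time series (e.g. 'p_max_pu') of the partial
+ # network.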
+ weights = {'p': ('p_nom_opt', 'p_max_pu'),
+ 'q': (('p_nom_opt',)
+ if (getattr(self.clustered_network, 'allocation', None)
+ ==
+ 'p_nom')
+ else ('p_nom_opt', 'p_max_pu')),
+ 'state_of_charge': ('p_nom_opt',)}
+ filters = {'q': lambda o: o.control == "PV"}
+ for bustype in bustypes:
+ pn_t = getattr(partial_network, bustype + '_t')
+ cl_t = getattr(self.clustered_network, bustype + '_t')
+ pn_buses = getattr(partial_network, bustype)
+ cl_buses = getattr(self.clustered_network, bustype)
+ groups = product(*
+ [ [ {'key': key, 'value': value}
+ for value in set(pn_buses.loc[:, key])]
+ for key in bustypes[bustype]['group_by']])
+ for group in groups:
+ clb = cl_buses[cl_buses.bus == cluster]
+ query = " & ".join(["({key} == {value!r})".format(**axis)
+ for axis in group])
+ clb = clb.query(query)
+ if len(clb) == 0:
+ continue
+ assert len(clb) == 1, (
+ "Cluster {} has {} buses for group {}.\n"
+ .format(cluster, len(clb), group) +
+ "Should be exactly one.")
+ # Remove buses not belonging to the partial network
+ pnb = pn_buses.iloc[
+ [i for i, row in enumerate(pn_buses.itertuples())
+ if not row.bus.startswith(self.idx_prefix) ]]
+ pnb = pnb.query(query)
+ assert not pnb.empty, (
+ "Cluster has a bus for:" +
+ "\n ".join(["{key}: {value!r}".format(**axis)
+ for axis in group]) +
+ "\nbut no matching buses in its corresponding " +
+ "partial network.")
+
+ if not (pnb.loc[:, 'p_nom_extendable'].all() or
+ not pnb.loc[:, 'p_nom_extendable'].any()):
+                    raise NotImplementedError(
+                        "The `'p_nom_extendable'` flag for buses in the" +
+                        " partial network with:" +
+                        "\n    ".join(["{key}: {value!r}".format(**axis)
+                                        for axis in group]) +
+                        "\ndoes not have the same value on all buses." +
+                        "\nThis is not supported.")
+ else:
+ assert (pnb.loc[:, 'p_nom_extendable'] ==
+ clb.iloc[0].at['p_nom_extendable']).all(), (
+ "The `'p_nom_extendable'` flag for the current " +
+ "cluster's bus does not have the same value " +
+                        "it has on the buses of its partial network.")
+
+ if clb.iloc[0].at['p_nom_extendable']:
+ # That means, `p_nom` got computed via optimization and we
+ # have to distribute it into the subnetwork first.
+ pnb_p_nom_max = pnb.loc[:, 'p_nom_max']
+ p_nom_max_global = pnb_p_nom_max.sum(axis='index')
+ pnb.loc[:, 'p_nom_opt'] = (
+ clb.iloc[0].at['p_nom_opt'] *
+ pnb_p_nom_max /
+ p_nom_max_global)
+ getattr(self.original_network,
+ bustype).loc[
+ pnb.index,
+ 'p_nom_opt'] = pnb.loc[:, 'p_nom_opt']
+ pnb.loc[:, 'p_nom'] = pnb.loc[:, 'p_nom_opt']
+ else:
+ # That means 'p_nom_opt' didn't get computed and is
+ # potentially not present in the dataframe. But we want to
+ # always use 'p_nom_opt' in the remaining code, so save a
+ # view of the computed 'p_nom' values under 'p_nom_opt'.
+ pnb.loc[:, 'p_nom_opt'] = pnb.loc[:, 'p_nom']
+
+ # This probably shouldn't be here, but rather in
+ # `transfer_results`, but it's easier to do it this way right
+ # now.
+ getattr(self.original_network, bustype).loc[
+ pnb.index,
+ 'p_nom_opt'] = pnb.loc[:, 'p_nom_opt']
+ timed = lambda key, series=set(s
+ for s in cl_t
+ if not cl_t[s].empty
+ if not pn_t[s].columns.intersection(pnb.index).empty
+ ): key in series
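+                # timed(key) is True iff `key` names a non-empty clustered
+                # time series that also exists for this partial network's
+                # buses; the series set is bound once via a default argument.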
+
+ for s in bustypes[bustype]['series']:
+ if s in self.skip:
+ continue
+ filtered = pnb.loc[filters.get(s, slice(None))]
+ clt = cl_t[s].loc[:, next(clb.itertuples()).Index]
+ weight = reduce(multiply,
+ (filtered.loc[:, key]
+ if not timed(key)
+ else pn_t[key].loc[:, filtered.index]
+ for key in weights[s]),
+ 1)
+ loc = ((slice(None),)
+ if any(timed(w) for w in weights[s])
+ else ())
+ ws = weight.sum(axis=len(loc))
+ for bus_id in filtered.index:
+ values = clt * weight.loc[loc + (bus_id,)] / ws
+ pn_t[s].insert(len(pn_t[s].columns), bus_id, values)
+
+
+ def transfer_results(self, *args, **kwargs):
+ kwargs['bustypes'] = ['generators', 'storage_units']
+ kwargs['series'] = {'generators': {'p'},
+ 'storage_units': {'p', 'state_of_charge'}}
+ return super().transfer_results(*args, **kwargs)
+
+
+def swap_series(s):
+ return pd.Series(s.index.values, index=s)
+
+
+def filter_internal_connector(conn, is_bus_in_cluster):
+ return conn[conn.bus0.apply(is_bus_in_cluster)
+ & conn.bus1.apply(is_bus_in_cluster)]
+
+
+def filter_left_external_connector(conn, is_bus_in_cluster):
+ return conn[~ conn.loc[:, 'bus0'].apply(is_bus_in_cluster)
+ & conn.loc[:, 'bus1'].apply(is_bus_in_cluster)]
+
+
+def filter_right_external_connector(conn, is_bus_in_cluster):
+ return conn[conn.bus0.apply(is_bus_in_cluster)
+ & ~conn.bus1.apply(is_bus_in_cluster)]
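+# Note: filter_internal_connector keeps connectors with both endpoints in
+# the cluster, while the two *_external_connector variants keep those
+# crossing the cluster boundary with bus0 (left) or bus1 (right) outside.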
+
+
+def filter_buses(bus, buses):
+ return bus[bus.bus.isin(buses)]
+
+
+def filter_on_buses(connective, buses):
+    return connective[connective.bus.isin(buses)]
+
+
+def update_constraints(network, externals):
+ pass
diff --git a/etrago/cluster/networkclustering.py b/etrago/cluster/networkclustering.py
index ec8842b4..d479955b 100644
--- a/etrago/cluster/networkclustering.py
+++ b/etrago/cluster/networkclustering.py
@@ -18,7 +18,7 @@
# along with this program. If not, see .
# File description for read-the-docs
-""" Networkclustering.py defines the methods to cluster power grid networks
+""" Networkclustering.py defines the methods to cluster power grid networks
spatially for applications within the tool eTraGo."""
import os
@@ -54,6 +54,8 @@
def _leading(busmap, df):
+    """Returns a function that, for each aggregated column, picks the
+    value of the group's leading bus, i.e. the bus the busmap assigns
+    to the group's first element.
+    """
def leader(x):
ix = busmap[x.index[0]]
return df.loc[ix, x.name]
@@ -386,7 +388,9 @@ def fetch():
def kmean_clustering(network, n_clusters=10, load_cluster=False,
line_length_factor=1.25,
remove_stubs=False, use_reduced_coordinates=False,
- bus_weight_tocsv=None, bus_weight_fromcsv=None):
+ bus_weight_tocsv=None, bus_weight_fromcsv=None,
+ n_init=10, max_iter=300, tol=1e-4,
+ n_jobs=1):
""" Main function of the k-mean clustering approach. Maps an original
network to a new one with adjustable number of nodes and new coordinates.
@@ -400,7 +404,7 @@ def kmean_clustering(network, n_clusters=10, load_cluster=False,
load_cluster : boolean
Loads cluster coordinates from a former calculation.
-
+
line_length_factor : float
Factor to multiply the crow-flies distance between new buses in order
to get new line lengths.
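+
+    n_init : int
+        Number of k-means runs with different centroid seeds; the best
+        result is kept (sklearn-style parameter, assumed to be passed
+        through to the underlying k-means).
+    max_iter : int
+        Maximum number of iterations of a single k-means run.
+    tol : float
+        Convergence tolerance of the k-means algorithm.
+    n_jobs : int
+        Number of parallel jobs used for the clustering.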
@@ -425,6 +429,8 @@ def kmean_clustering(network, n_clusters=10, load_cluster=False,
Container for all network components.
"""
def weighting_for_scenario(x, save=None):
+        """Calculates per-bus weightings from normed generation and load
+        and optionally saves them to csv.
+        """
b_i = x.index
g = normed(gen.reindex(b_i, fill_value=0))
l = normed(load.reindex(b_i, fill_value=0))
@@ -445,6 +451,8 @@ def normed(x):
# prepare k-mean
# k-means clustering (first try)
network.generators.control = "PV"
+    network.storage_units.loc[network.storage_units.carrier ==
+                              'extendable_storage', 'control'] = "PV"
network.buses['v_nom'] = 380.
# problem our lines have no v_nom. this is implicitly defined by the
# connected buses:
@@ -462,16 +470,16 @@ def normed(x):
network.transformers.bus1.map(network.buses.v_nom)], axis=1)
network.import_components_from_dataframe(
- network.transformers.loc[:, ['bus0', 'bus1', 'x', 's_nom']]
+            network.transformers.loc[:, ['bus0', 'bus1', 'x', 's_nom',
+                                         'capital_cost']]
.assign(x=network.transformers.x * (380. /
- transformer_voltages.max(axis=1))**2)
+                    transformer_voltages.max(axis=1))**2, length=1)
.set_index('T' + trafo_index),
'Line')
network.transformers.drop(trafo_index, inplace=True)
for attr in network.transformers_t:
network.transformers_t[attr] = network.transformers_t[attr]\
- .reindex(columns=[])
+ .reindex(columns=[])
# remove stubs
if remove_stubs:
@@ -501,19 +509,19 @@ def normed(x):
non_conv_types = {
'biomass',
'wind_onshore',
- 'wind_offshore',
+ 'wind_offshore',
'solar',
'geothermal',
'load shedding',
'extendable_storage'}
# Attention: network.generators.carrier.unique()
- gen = (network.generators.loc[(network.generators.carrier\
- .isin(non_conv_types) == False)]\
- .groupby('bus').p_nom.sum()\
+    gen = (network.generators.loc[~network.generators.carrier
+                                  .isin(non_conv_types)]
+           .groupby('bus').p_nom.sum()
.reindex(network.buses.index, fill_value=0.) +
- network.storage_units\
- .loc[(network.storage_units.carrier\
- .isin(non_conv_types) == False)]
+           network.storage_units
+           .loc[~network.storage_units.carrier
+                .isin(non_conv_types)]
.groupby('bus').p_nom.sum()
.reindex(network.buses.index, fill_value=0.))
@@ -538,7 +546,10 @@ def normed(x):
bus_weightings=pd.Series(weight),
n_clusters=n_clusters,
load_cluster=load_cluster,
- n_jobs=-1)
+ n_init=n_init,
+ max_iter=max_iter,
+ tol=tol,
+ n_jobs=n_jobs)
# ToDo change function in order to use bus_strategies or similar
network.generators['weight'] = network.generators['p_nom']
@@ -549,6 +560,5 @@ def normed(x):
busmap,
aggregate_generators_weighted=True,
aggregate_one_ports=aggregate_one_ports)
- network = clustering.network
- return network
+ return clustering
diff --git a/etrago/cluster/snapshot.py b/etrago/cluster/snapshot.py
index e2987e6d..32a6e237 100644
--- a/etrago/cluster/snapshot.py
+++ b/etrago/cluster/snapshot.py
@@ -21,7 +21,7 @@
""" This module contains functions for calculating representative days/weeks
based on a pyPSA network object. It is designed to be used for the `lopf`
method. Essentially the tsam package
-( https://github.com/FZJ-IEK3-VSA/tsam ), which is developed by
+( https://github.com/FZJ-IEK3-VSA/tsam ), which is developed by
Leander Kotzur is used.
Remaining questions/tasks:
@@ -31,8 +31,10 @@
"""
import pandas as pd
-import pyomo.environ as po
-import tsam.timeseriesaggregation as tsam
+import os
+if 'READTHEDOCS' not in os.environ:
+ import pyomo.environ as po
+ import tsam.timeseriesaggregation as tsam
__copyright__ = ("Flensburg University of Applied Sciences, "
"Europa-Universität Flensburg, "
@@ -42,6 +44,8 @@
def snapshot_clustering(network, how='daily', clusters=10):
+    """Clusters the network's snapshots into typical periods (e.g. days)
+    using the tsam package and returns the network with adjusted
+    snapshots and snapshot weightings.
+    """
network = run(network=network.copy(), n_clusters=clusters,
how=how, normed=False)
@@ -106,13 +110,11 @@ def run(network, n_clusters=None, how='daily',
normed=False):
"""
"""
- # reduce storage costs due to clusters
- network.cluster = True
# calculate clusters
tsam_ts, cluster_weights, dates, hours = tsam_cluster(
- prepare_pypsa_timeseries(network), typical_periods=n_clusters,
- how='daily')
+ prepare_pypsa_timeseries(network), typical_periods=n_clusters,
+        how=how)
update_data_frames(network, cluster_weights, dates, hours)
@@ -122,16 +124,19 @@ def run(network, n_clusters=None, how='daily',
def prepare_pypsa_timeseries(network, normed=False):
"""
"""
-
if normed:
normed_loads = network.loads_t.p_set / network.loads_t.p_set.max()
+ normed_loads.columns = 'L' + normed_loads.columns
normed_renewables = network.generators_t.p_max_pu
+ normed_renewables.columns = 'G' + normed_renewables.columns
df = pd.concat([normed_renewables,
normed_loads], axis=1)
else:
loads = network.loads_t.p_set
+ loads.columns = 'L' + loads.columns
renewables = network.generators_t.p_set
+ renewables.columns = 'G' + renewables.columns
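+        # The 'L'/'G' prefixes keep load and generator columns distinct in
+        # the concatenated frame even when both are labelled by bus id.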
df = pd.concat([renewables, loads], axis=1)
return df
@@ -177,24 +182,23 @@ def update_data_frames(network, cluster_weights, dates, hours):
def daily_bounds(network, snapshots):
""" This will bound the storage level to 0.5 max_level every 24th hour.
"""
- if network.cluster:
- sus = network.storage_units
- # take every first hour of the clustered days
- network.model.period_starts = network.snapshot_weightings.index[0::24]
+ sus = network.storage_units
+ # take every first hour of the clustered days
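+    # (assumes hourly snapshots clustered into whole days, so every 24th
+    # index marks the first hour of a representative day)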
+ network.model.period_starts = network.snapshot_weightings.index[0::24]
- network.model.storages = sus.index
+ network.model.storages = sus.index
- def day_rule(m, s, p):
- """
- Sets the soc of the every first hour to the soc of the last hour
- of the day (i.e. + 23 hours)
- """
- return (
- m.state_of_charge[s, p] ==
- m.state_of_charge[s, p + pd.Timedelta(hours=23)])
+ def day_rule(m, s, p):
+ """
+        Sets the soc of every first hour of a day to the soc of the last
+        hour of that day (i.e. + 23 hours)
+ """
+ return (
+ m.state_of_charge[s, p] ==
+ m.state_of_charge[s, p + pd.Timedelta(hours=23)])
- network.model.period_bound = po.Constraint(
+ network.model.period_bound = po.Constraint(
network.model.storages, network.model.period_starts, rule=day_rule)
diff --git a/etrago/tools/extendable.py b/etrago/tools/extendable.py
index e05b2173..862c5a40 100644
--- a/etrago/tools/extendable.py
+++ b/etrago/tools/extendable.py
@@ -21,7 +21,18 @@
"""
Extendable.py defines function to set PyPSA-components extendable.
"""
-from etrago.tools.utilities import set_line_costs, set_trafo_costs
+from etrago.tools.utilities import (
+ set_line_costs,
+ set_trafo_costs,
+ convert_capital_costs,
+ find_snapshots,
+ buses_by_country)
+
+from etrago.cluster.snapshot import snapshot_clustering
+
+import numpy as np
+
+import time
__copyright__ = ("Flensburg University of Applied Sciences, "
"Europa-Universität Flensburg, "
@@ -31,9 +42,9 @@
__author__ = "ulfmueller, s3pp, wolfbunke, mariusves, lukasol"
-def extendable(network, extendable, overlay_scn_name=None):
+def extendable(network, args):
- if 'network' in extendable:
+ if 'network' in args['extendable']:
network.lines.s_nom_extendable = True
network.lines.s_nom_min = network.lines.s_nom
network.lines.s_nom_max = float("inf")
@@ -50,14 +61,89 @@ def extendable(network, extendable, overlay_scn_name=None):
network = set_line_costs(network)
network = set_trafo_costs(network)
+
+ if 'german_network' in args['extendable']:
+ buses = network.buses[~network.buses.index.isin(
+ buses_by_country(network).index)]
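+        # buses_by_country() is assumed to return the foreign buses, so
+        # taking the complement selects the German part of the network.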
+ network.lines.loc[(network.lines.bus0.isin(buses.index)) &
+ (network.lines.bus1.isin(buses.index)),
+ 's_nom_extendable'] = True
+ network.lines.loc[(network.lines.bus0.isin(buses.index)) &
+ (network.lines.bus1.isin(buses.index)),
+ 's_nom_min'] = network.lines.s_nom
+ network.lines.loc[(network.lines.bus0.isin(buses.index)) &
+ (network.lines.bus1.isin(buses.index)),
+ 's_nom_max'] = float("inf")
+
+ if not network.transformers.empty:
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index),'s_nom_extendable'] = True
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index),'s_nom_min'] = network.transformers.s_nom
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index),'s_nom_max'] = float("inf")
+
+ if not network.links.empty:
+ network.links.loc[(network.links.bus0.isin(buses.index)) &
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_extendable'] = True
+ network.links.loc[(network.links.bus0.isin(buses.index)) &
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_min'] = network.links.p_nom
+ network.links.loc[(network.links.bus0.isin(buses.index)) &
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_max'] = float("inf")
+
+ network = set_line_costs(network)
+ network = set_trafo_costs(network)
+
+
+ if 'foreign_network' in args['extendable']:
+ buses = network.buses[network.buses.index.isin(
+ buses_by_country(network).index)]
+ network.lines.loc[network.lines.bus0.isin(buses.index) |
+ network.lines.bus1.isin(buses.index) ,
+ 's_nom_extendable'] = True
+ network.lines.loc[network.lines.bus0.isin(buses.index) |
+ network.lines.bus1.isin(buses.index),
+ 's_nom_min'] = network.lines.s_nom
+ network.lines.loc[network.lines.bus0.isin(buses.index) |
+ network.lines.bus1.isin(buses.index),
+ 's_nom_max'] = float("inf")
+
+ if not network.transformers.empty:
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index) | network.transformers.bus1.isin(
+ buses.index) ,'s_nom_extendable'] = True
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index) | network.transformers.bus1.isin(
+ buses.index) ,'s_nom_min'] = network.transformers.s_nom
+ network.transformers.loc[network.transformers.bus0.isin(
+ buses.index) | network.transformers.bus1.isin(
+ buses.index) ,'s_nom_max'] = float("inf")
- if 'transformers' in extendable:
+ if not network.links.empty:
+ network.links.loc[(network.links.bus0.isin(buses.index)) |
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_extendable'] = True
+ network.links.loc[(network.links.bus0.isin(buses.index)) |
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_min'] = network.links.p_nom
+ network.links.loc[(network.links.bus0.isin(buses.index)) |
+ (network.links.bus1.isin(buses.index)),
+ 'p_nom_max'] = float("inf")
+
+ network = set_line_costs(network)
+ network = set_trafo_costs(network)
+
+
+ if 'transformers' in args['extendable']:
network.transformers.s_nom_extendable = True
network.transformers.s_nom_min = network.transformers.s_nom
network.transformers.s_nom_max = float("inf")
network = set_trafo_costs(network)
- if 'storages' in extendable:
+ if 'storages' in args['extendable'] or 'storage' in args['extendable']:
if network.storage_units.\
carrier[network.
storage_units.carrier ==
@@ -66,39 +152,171 @@ def extendable(network, extendable, overlay_scn_name=None):
'extendable_storage',
'p_nom_extendable'] = True
- if 'generators' in extendable:
+ if 'generators' in args['extendable']:
network.generators.p_nom_extendable = True
network.generators.p_nom_min = network.generators.p_nom
network.generators.p_nom_max = float("inf")
-# Extension settings for extension-NEP 2305 scenarios
-
- if 'NEP Zubaunetz' in extendable:
- network.lines.loc[(network.lines.project != 'EnLAG') & (
- network.lines.scn_name == 'extension_' + overlay_scn_name),
- 's_nom_extendable'] = True
- network.transformers.loc[(network.transformers.project != 'EnLAG') & (
- network.transformers.scn_name == ('extension_'+ overlay_scn_name)),
+ # Extension settings for extension-NEP 2035 scenarios
+ if 'NEP Zubaunetz' in args['extendable']:
+ for i in range(len(args['scn_extension'])):
+ network.lines.loc[(network.lines.project != 'EnLAG') & (
+ network.lines.scn_name == 'extension_' + args['scn_extension'][i]),
's_nom_extendable'] = True
- network.links.loc[network.links.scn_name == (
- 'extension_' + overlay_scn_name), 'p_nom_extendable'] = True
-
- if 'overlay_network' in extendable:
- network.lines.loc[network.lines.scn_name == (
- 'extension_' + overlay_scn_name), 's_nom_extendable'] = True
- network.links.loc[network.links.scn_name == (
- 'extension_' + overlay_scn_name), 'p_nom_extendable'] = True
- network.transformers.loc[network.transformers.scn_name == (
- 'extension_' + overlay_scn_name), 's_nom_extendable'] = True
-
- if 'overlay_lines' in extendable:
- network.lines.loc[network.lines.scn_name == (
- 'extension_' + overlay_scn_name), 's_nom_extendable'] = True
- network.links.loc[network.links.scn_name == (
- 'extension_' + overlay_scn_name), 'p_nom_extendable'] = True
- network.lines.loc[network.lines.scn_name == (
- 'extension_' + overlay_scn_name),
- 'capital_cost'] = network.lines.capital_cost + (2 * 14166)
+
+ network.transformers.loc[(
+ network.transformers.project != 'EnLAG') & (
+ network.transformers.scn_name == (
+ 'extension_'+ args['scn_extension'][i])),
+ 's_nom_extendable'] = True
+
+ network.links.loc[network.links.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 'p_nom_extendable'] = True
+
+ if 'overlay_network' in args['extendable']:
+ for i in range(len(args['scn_extension'])):
+ network.lines.loc[network.lines.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 's_nom_extendable'] = True
+
+ network.links.loc[network.links.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 'p_nom_extendable'] = True
+
+ network.transformers.loc[network.transformers.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 's_nom_extendable'] = True
+
+ if 'overlay_lines' in args['extendable']:
+ for i in range(len(args['scn_extension'])):
+ network.lines.loc[network.lines.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 's_nom_extendable'] = True
+
+ network.links.loc[network.links.scn_name == (
+ 'extension_' + args['scn_extension'][i]
+ ), 'p_nom_extendable'] = True
+
+ network.lines.loc[network.lines.scn_name == (
+ 'extension_' + args['scn_extension'][i]),
+ 'capital_cost'] = network.lines.capital_cost + (2 * 14166)
+
+    network.lines.loc[~network.lines.s_nom_extendable, 's_nom_min'] = \
+        network.lines.s_nom
+
+    network.transformers.loc[~network.transformers.s_nom_extendable,
+                             's_nom_min'] = network.transformers.s_nom
+
+    network.lines.loc[~network.lines.s_nom_extendable, 's_nom_max'] = \
+        network.lines.s_nom
+
+    network.transformers.loc[~network.transformers.s_nom_extendable,
+                             's_nom_max'] = network.transformers.s_nom
return network
+
+def extension_preselection(network, args, method, days=3):
+
+ """
+    Function that preselects lines which are extended in snapshots leading
+    to overloading, in order to reduce the number of extension variables.
+
+ Parameters
+ ----------
+    network : :class:`pypsa.Network`
+ Overall container of PyPSA
+ args : dict
+ Arguments set in appl.py
+ method: str
+ Choose method of selection:
+        'extreme_situations' for remarkable timesteps
+        (e.g. minimal residual load)
+ 'snapshot_clustering' for snapshot clustering with number of days
+ days: int
+ Number of clustered days, only used when method = 'snapshot_clustering'
+
+ Returns
+ -------
+    network : :class:`pypsa.Network`
+ Overall container of PyPSA
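+
+    Example
+    -------
+    A hedged usage sketch (assumes `network` and the `args` dict as set up
+    in appl.py; the parameter values are illustrative)::
+
+        network = extension_preselection(network, args,
+                                         method='snapshot_clustering',
+                                         days=3)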
+ """
+
+ weighting = network.snapshot_weightings
+
+ if method == 'extreme_situations':
+ snapshots = find_snapshots(network, 'residual load')
+ snapshots = snapshots.append(find_snapshots(network, 'wind_onshore'))
+ snapshots = snapshots.append(find_snapshots(network, 'solar'))
+ snapshots = snapshots.drop_duplicates()
+ snapshots = snapshots.sort_values()
+
+ if method == 'snapshot_clustering':
+ network_cluster = snapshot_clustering(network, how='daily',
+ clusters=days)
+ snapshots = network_cluster.snapshots
+ network.snapshot_weightings = network_cluster.snapshot_weightings
+
+ # Set all lines and trafos extendable in network
+ network.lines.loc[:, 's_nom_extendable'] = True
+ network.lines.loc[:, 's_nom_min'] = network.lines.s_nom
+ network.lines.loc[:, 's_nom_max'] = np.inf
+
+ network.links.loc[:, 'p_nom_extendable'] = True
+ network.links.loc[:, 'p_nom_min'] = network.links.p_nom
+ network.links.loc[:, 'p_nom_max'] = np.inf
+
+ network.transformers.loc[:, 's_nom_extendable'] = True
+ network.transformers.loc[:, 's_nom_min'] = network.transformers.s_nom
+ network.transformers.loc[:, 's_nom_max'] = np.inf
+
+ network = set_line_costs(network)
+ network = set_trafo_costs(network)
+ network = convert_capital_costs(network, 1, 1)
+ extended_lines = network.lines.index[network.lines.s_nom_opt >
+ network.lines.s_nom]
+ extended_links = network.links.index[network.links.p_nom_opt >
+ network.links.p_nom]
+
+ x = time.time()
+    for i in range(len(snapshots)):
+ if i > 0:
+ network.lopf(snapshots[i], solver_name=args['solver'])
+ extended_lines = extended_lines.append(
+ network.lines.index[network.lines.s_nom_opt >
+ network.lines.s_nom])
+ extended_lines = extended_lines.drop_duplicates()
+ extended_links = extended_links.append(
+ network.links.index[network.links.p_nom_opt >
+ network.links.p_nom])
+ extended_links = extended_links.drop_duplicates()
+
+ print("Number of preselected lines: ", len(extended_lines))
+
+ network.lines.loc[~network.lines.index.isin(extended_lines),
+ 's_nom_extendable'] = False
+ network.lines.loc[network.lines.s_nom_extendable, 's_nom_min']\
+ = network.lines.s_nom
+ network.lines.loc[network.lines.s_nom_extendable, 's_nom_max']\
+ = np.inf
+
+ network.links.loc[~network.links.index.isin(extended_links),
+ 'p_nom_extendable'] = False
+ network.links.loc[network.links.p_nom_extendable, 'p_nom_min']\
+ = network.links.p_nom
+ network.links.loc[network.links.p_nom_extendable, 'p_nom_max']\
+ = np.inf
+
+ network.snapshot_weightings = weighting
+ network = set_line_costs(network)
+ network = set_trafo_costs(network)
+ network = convert_capital_costs(network, args['start_snapshot'],
+ args['end_snapshot'])
+
+ y = time.time()
+ z1st = (y - x) / 60
+
+ print("Time for first LOPF [min]:", round(z1st, 2))
+
+ return network
diff --git a/etrago/tools/io.py b/etrago/tools/io.py
index dc55f598..f53e9d6b 100644
--- a/etrago/tools/io.py
+++ b/etrago/tools/io.py
@@ -51,16 +51,17 @@
import pypsa
from importlib import import_module
import pandas as pd
-from sqlalchemy.orm.exc import NoResultFound
-from sqlalchemy import and_, func, or_
from collections import OrderedDict
import re
import json
import os
-from geoalchemy2.shape import to_shape
+import numpy as np
+if 'READTHEDOCS' not in os.environ:
+ from geoalchemy2.shape import to_shape
+ from sqlalchemy.orm.exc import NoResultFound
+ from sqlalchemy import and_, func, or_
#from etrago.tools.nep import add_by_scenario, add_series_by_scenario
-import numpy as np
packagename = 'egoio.db_tables'
temp_ormclass = 'TempResolution'
@@ -254,8 +255,7 @@ def fetch_by_relname(self, name):
if name == 'Link':
df['bus0'] = df.bus0.astype(int)
df['bus1'] = df.bus1.astype(int)
-
-
+
if 'source' in df:
df.source = df.source.map(self.id_to_source())
@@ -312,24 +312,23 @@ def series_fetch_by_relname(self, name, column):
return df
- def build_network(self, network = None, *args, **kwargs):
+ def build_network(self, network=None, *args, **kwargs):
""" Core method to construct PyPSA Network object.
"""
# TODO: build_network takes care of divergences in database design and
# future PyPSA changes from PyPSA's v0.6 on. This concept should be
# replaced, when the oedb has a revision system in place, because
# sometime this will break!!!
-
- if network != None :
- network = network
-
+
+        if network is not None:
+ network = network
+
else:
network = pypsa.Network()
network.set_snapshots(self.timeindex)
timevarying_override = False
-
if pypsa.__version__ == '0.11.0':
old_to_new_name = {'Generator':
{'p_min_pu_fixed': 'p_min_pu',
@@ -402,24 +401,25 @@ def build_network(self, network = None, *args, **kwargs):
self.network = network
return network
-
+
+
def clear_results_db(session):
'''Used to clear the result tables in the OEDB. Caution!
This deletes EVERY RESULT SET!'''
-
+
from egoio.db_tables.model_draft import EgoGridPfHvResultBus as BusResult,\
- EgoGridPfHvResultBusT as BusTResult,\
- EgoGridPfHvResultStorage as StorageResult,\
- EgoGridPfHvResultStorageT as StorageTResult,\
- EgoGridPfHvResultGenerator as GeneratorResult,\
- EgoGridPfHvResultGeneratorT as GeneratorTResult,\
- EgoGridPfHvResultLine as LineResult,\
- EgoGridPfHvResultLineT as LineTResult,\
- EgoGridPfHvResultLoad as LoadResult,\
- EgoGridPfHvResultLoadT as LoadTResult,\
- EgoGridPfHvResultTransformer as TransformerResult,\
- EgoGridPfHvResultTransformerT as TransformerTResult,\
- EgoGridPfHvResultMeta as ResultMeta
+ EgoGridPfHvResultBusT as BusTResult,\
+ EgoGridPfHvResultStorage as StorageResult,\
+ EgoGridPfHvResultStorageT as StorageTResult,\
+ EgoGridPfHvResultGenerator as GeneratorResult,\
+ EgoGridPfHvResultGeneratorT as GeneratorTResult,\
+ EgoGridPfHvResultLine as LineResult,\
+ EgoGridPfHvResultLineT as LineTResult,\
+ EgoGridPfHvResultLoad as LoadResult,\
+ EgoGridPfHvResultLoadT as LoadTResult,\
+ EgoGridPfHvResultTransformer as TransformerResult,\
+ EgoGridPfHvResultTransformerT as TransformerTResult,\
+ EgoGridPfHvResultMeta as ResultMeta
print('Are you sure that you want to clear all results in the OEDB?')
choice = ''
while choice not in ['y', 'n']:
@@ -448,10 +448,10 @@ def clear_results_db(session):
else:
print('Deleting aborted!')
else:
- print('Deleting aborted!')
-
+ print('Deleting aborted!')
-def results_to_oedb(session, network, args, grid='hv', safe_results = False):
+
+def results_to_oedb(session, network, args, grid='hv', safe_results=False):
"""Return results obtained from PyPSA to oedb"""
# moved this here to prevent error when not using the mv-schema
import datetime
@@ -459,81 +459,87 @@ def results_to_oedb(session, network, args, grid='hv', safe_results = False):
print('MV currently not implemented')
elif grid.lower() == 'hv':
from egoio.db_tables.model_draft import EgoGridPfHvResultBus as BusResult,\
- EgoGridPfHvResultBusT as BusTResult,\
- EgoGridPfHvResultStorage as StorageResult,\
- EgoGridPfHvResultStorageT as StorageTResult,\
- EgoGridPfHvResultGenerator as GeneratorResult,\
- EgoGridPfHvResultGeneratorT as GeneratorTResult,\
- EgoGridPfHvResultLine as LineResult,\
- EgoGridPfHvResultLineT as LineTResult,\
- EgoGridPfHvResultLoad as LoadResult,\
- EgoGridPfHvResultLoadT as LoadTResult,\
- EgoGridPfHvResultTransformer as TransformerResult,\
- EgoGridPfHvResultTransformerT as TransformerTResult,\
- EgoGridPfHvResultMeta as ResultMeta,\
- EgoGridPfHvSource as Source
+ EgoGridPfHvResultBusT as BusTResult,\
+ EgoGridPfHvResultStorage as StorageResult,\
+ EgoGridPfHvResultStorageT as StorageTResult,\
+ EgoGridPfHvResultGenerator as GeneratorResult,\
+ EgoGridPfHvResultGeneratorT as GeneratorTResult,\
+ EgoGridPfHvResultLine as LineResult,\
+ EgoGridPfHvResultLineT as LineTResult,\
+ EgoGridPfHvResultLoad as LoadResult,\
+ EgoGridPfHvResultLoadT as LoadTResult,\
+ EgoGridPfHvResultTransformer as TransformerResult,\
+ EgoGridPfHvResultTransformerT as TransformerTResult,\
+ EgoGridPfHvResultMeta as ResultMeta,\
+ EgoGridPfHvSource as Source
else:
print('Please enter mv or hv!')
-
+
print('Uploading results to db...')
# get last result id and get new one
last_res_id = session.query(func.max(ResultMeta.result_id)).scalar()
if last_res_id == None:
new_res_id = 1
- else:
+ else:
new_res_id = last_res_id + 1
-
- # result meta data
+
+ # result meta data
res_meta = ResultMeta()
meta_misc = []
for arg, value in args.items():
- if arg not in dir(res_meta) and arg not in ['db','lpfile','results','export']:
- meta_misc.append([arg,str(value)])
+ if arg not in dir(res_meta) and arg not in ['db', 'lpfile', 'results', 'export']:
+ meta_misc.append([arg, str(value)])
- res_meta.result_id=new_res_id
- res_meta.scn_name=args['scn_name']
- res_meta.calc_date= datetime.datetime.now()
+ res_meta.result_id = new_res_id
+ res_meta.scn_name = args['scn_name']
+ res_meta.calc_date = datetime.datetime.now()
res_meta.user_name = args['user_name']
- res_meta.method=args['method']
+ res_meta.method = args['method']
res_meta.start_snapshot = args['start_snapshot']
res_meta.end_snapshot = args['end_snapshot']
res_meta.safe_results = safe_results
res_meta.snapshots = network.snapshots.tolist()
res_meta.solver = args['solver']
- res_meta.settings=meta_misc
-
+ res_meta.settings = meta_misc
+
session.add(res_meta)
session.commit()
- #get source_id
- sources = pd.read_sql(session.query(Source).statement,session.bind)
+ # get source_id
+ sources = pd.read_sql(session.query(Source).statement, session.bind)
for gen in network.generators.index:
if network.generators.carrier[gen] not in sources.name.values:
new_source = Source()
- new_source.source_id = session.query(func.max(Source.source_id)).scalar()+1
+ new_source.source_id = session.query(
+ func.max(Source.source_id)).scalar()+1
new_source.name = network.generators.carrier[gen]
session.add(new_source)
session.commit()
- sources = pd.read_sql(session.query(Source).statement,session.bind)
+ sources = pd.read_sql(session.query(Source).statement, session.bind)
try:
- old_source_id = int(sources.source_id[sources.name == network.generators.carrier[gen]])
+ old_source_id = int(
+ sources.source_id[sources.name == network.generators.carrier[gen]])
network.generators.set_value(gen, 'source', int(old_source_id))
except:
- print('Source ' + network.generators.carrier[gen] + ' is not in the source table!')
+ print(
+ 'Source ' + network.generators.carrier[gen] + ' is not in the source table!')
for stor in network.storage_units.index:
if network.storage_units.carrier[stor] not in sources.name.values:
new_source = Source()
- new_source.source_id = session.query(func.max(Source.source_id)).scalar()+1
+ new_source.source_id = session.query(
+ func.max(Source.source_id)).scalar()+1
new_source.name = network.storage_units.carrier[stor]
session.add(new_source)
session.commit()
- sources = pd.read_sql(session.query(Source).statement,session.bind)
+ sources = pd.read_sql(session.query(Source).statement, session.bind)
try:
- old_source_id = int(sources.source_id[sources.name == network.storage_units.carrier[stor]])
- network.storage_units.set_value(stor, 'source', int(old_source_id))
+ old_source_id = int(
+ sources.source_id[sources.name == network.storage_units.carrier[stor]])
+ network.storage_units.set_value(stor, 'source', int(old_source_id))
except:
- print('Source ' + network.storage_units.carrier[stor] + ' is not in the source table!')
-
+ print(
+ 'Source ' + network.storage_units.carrier[stor] + ' is not in the source table!')
+
whereismyindex = {BusResult: network.buses.index,
LoadResult: network.loads.index,
LineResult: network.lines.index,
@@ -546,7 +552,7 @@ def results_to_oedb(session, network, args, grid='hv', safe_results = False):
TransformerTResult: network.transformers.index,
StorageTResult: network.storage_units.index,
GeneratorTResult: network.generators.index}
-
+
whereismydata = {BusResult: network.buses,
LoadResult: network.loads,
LineResult: network.lines,
@@ -559,17 +565,17 @@ def results_to_oedb(session, network, args, grid='hv', safe_results = False):
TransformerTResult: network.transformers_t,
StorageTResult: network.storage_units_t,
GeneratorTResult: network.generators_t}
-
+
new_to_old_name = {'p_min_pu_fixed': 'p_min_pu',
- 'p_max_pu_fixed': 'p_max_pu',
- 'dispatch': 'former_dispatch',
- 'current_type': 'carrier',
- 'soc_cyclic': 'cyclic_state_of_charge',
- 'soc_initial': 'state_of_charge_initial'}
-
- ormclasses = [BusResult,LoadResult,LineResult,TransformerResult,GeneratorResult,StorageResult,
- BusTResult,LoadTResult,LineTResult,TransformerTResult,GeneratorTResult,StorageTResult]
-
+ 'p_max_pu_fixed': 'p_max_pu',
+ 'dispatch': 'former_dispatch',
+ 'current_type': 'carrier',
+ 'soc_cyclic': 'cyclic_state_of_charge',
+ 'soc_initial': 'state_of_charge_initial'}
+
+ ormclasses = [BusResult, LoadResult, LineResult, TransformerResult, GeneratorResult, StorageResult,
+ BusTResult, LoadTResult, LineTResult, TransformerTResult, GeneratorTResult, StorageTResult]
+
for ormclass in ormclasses:
for index in whereismyindex[ormclass]:
myinstance = ormclass()
@@ -580,73 +586,80 @@ def results_to_oedb(session, network, args, grid='hv', safe_results = False):
if '_id' in col:
class_id_name = col
else:
- continue
+ continue
setattr(myinstance, class_id_name, index)
columns.remove(class_id_name)
-
+
if str(ormclass)[:-2].endswith('T'):
for col in columns:
- if col =='soc_set':
+ if col == 'soc_set':
try:
- setattr(myinstance, col, getattr(whereismydata[ormclass],'state_of_charge_set')[index].tolist())
+ setattr(myinstance, col, getattr(
+ whereismydata[ormclass], 'state_of_charge_set')[index].tolist())
except:
pass
- else:
+ else:
try:
- setattr(myinstance, col, getattr(whereismydata[ormclass],col)[index].tolist())
+ setattr(myinstance, col, getattr(
+ whereismydata[ormclass], col)[index].tolist())
except:
pass
session.add(myinstance)
-
- else:
+
+ else:
for col in columns:
if col in new_to_old_name:
if col == 'soc_cyclic':
try:
- setattr(myinstance, col, bool(whereismydata[ormclass].loc[index, new_to_old_name[col]]))
+ setattr(myinstance, col, bool(
+ whereismydata[ormclass].loc[index, new_to_old_name[col]]))
except:
pass
elif 'Storage' in str(ormclass) and col == 'dispatch':
try:
- setattr(myinstance, col, whereismydata[ormclass].loc[index, col])
+ setattr(myinstance, col,
+ whereismydata[ormclass].loc[index, col])
except:
- pass
+ pass
else:
try:
- setattr(myinstance, col, whereismydata[ormclass].loc[index, new_to_old_name[col]])
+ setattr(
+ myinstance, col, whereismydata[ormclass].loc[index, new_to_old_name[col]])
except:
pass
- elif col in ['s_nom_extendable','p_nom_extendable']:
+ elif col in ['s_nom_extendable', 'p_nom_extendable']:
try:
- setattr(myinstance, col, bool(whereismydata[ormclass].loc[index, col]))
+ setattr(myinstance, col, bool(
+ whereismydata[ormclass].loc[index, col]))
except:
pass
else:
try:
- setattr(myinstance, col, whereismydata[ormclass].loc[index, col])
+ setattr(myinstance, col,
+ whereismydata[ormclass].loc[index, col])
except:
pass
session.add(myinstance)
-
+
session.commit()
print('Upload finished!')
-
+
return
+
def run_sql_script(conn, scriptname='results_md2grid.sql'):
"""This function runs .sql scripts in the folder 'sql_scripts' """
-
+
script_dir = os.path.abspath(
- os.path.join(os.path.dirname(__file__), 'sql_scripts'))
+ os.path.join(os.path.dirname(__file__), 'sql_scripts'))
script_str = open(os.path.join(script_dir, scriptname)).read()
conn.execution_options(autocommit=True).execute(script_str)
-
+
return
-
-def extension (network, session, scn_extension, start_snapshot, end_snapshot, k_mean_clustering, **kwargs):
+def extension(network, session, version, scn_extension, start_snapshot,
+              end_snapshot, **kwargs):
'''
Function that adds an additional network to the existing network container.
The new network can include every PyPSA-component (e.g. buses, lines, links).
@@ -654,66 +667,63 @@ def extension (network, session, scn_extension, start_snapshot, end_snapshot, k_
All components and its timeseries of the additional scenario need to be inserted in the fitting 'model_draft.ego_grid_pf_hv_extension_' table.
     The scn_name in the tables has to be labelled with 'extension_' + scn_name (e.g. 'extension_nep2035').
-
+
Until now, the tables include three additional scenarios:
     'nep2035_confirmed': all new lines and needed transformers planned in the 'Netzentwicklungsplan 2035' (NEP2035) that have been confirmed by the Bundesnetzagentur (BNetzA)
-
+
'nep2035_b2': all new lines and needed transformers planned in the NEP 2035 in the scenario 2035 B2
-
- 'BE_NO_NEP 2035': DC-lines and transformers to connect the upcomming electrical-neighbours Belgium and Norway
+
+    'BE_NO_NEP 2035': DC-lines and transformers to connect the upcoming electrical neighbours Belgium and Norway
Generation, loads and its timeseries in Belgium and Norway for scenario 'NEP 2035'
-
+
Input
-----
network : The existing network container (e.g. scenario 'NEP 2035')
session : session-data
     scn_extension : Name of the additional scenario (WITHOUT 'extension_')
- start_snapshot, end_snapshot: Simulation time
-
+ start_snapshot, end_snapshot: Simulation time
+
Output
------
network : Network container including existing and additional network
-
+
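+    Example
+    -------
+    A hedged sketch (session as set up in appl.py; the scenario name is
+    illustrative):
+
+        network = extension(network, session, version=None,
+                            scn_extension='nep2035_confirmed',
+                            start_snapshot=1, end_snapshot=24)
+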
'''
- ### Adding overlay-network to existing network
+
+ if version is None:
+ ormcls_prefix = 'EgoGridPfHvExtension'
+ else:
+ ormcls_prefix = 'EgoPfHvExtension'
+
+ # Adding overlay-network to existing network
scenario = NetworkScenario(session,
- version=None,
- prefix='EgoGridPfHvExtension',
+ version = version,
+ prefix=ormcls_prefix,
method=kwargs.get('method', 'lopf'),
start_snapshot=start_snapshot,
end_snapshot=end_snapshot,
- scn_name='extension_' + scn_extension )
+ scn_name='extension_' + scn_extension)
network = scenario.build_network(network)
-
- ### Allow lossless links to conduct bidirectional
+
+    # Allow lossless links to transmit power in both directions
network.links.loc[network.links.efficiency == 1.0, 'p_min_pu'] = -1
-
- ### Set coordinates for new buses
- extension_buses = network.buses[network.buses.scn_name =='extension_' + scn_extension ]
+
+ # Set coordinates for new buses
+ extension_buses = network.buses[network.buses.scn_name ==
+ 'extension_' + scn_extension]
for idx, row in extension_buses.iterrows():
wkt_geom = to_shape(row['geom'])
network.buses.loc[idx, 'x'] = wkt_geom.x
network.buses.loc[idx, 'y'] = wkt_geom.y
-
- network.transformers = network.transformers[network.transformers.bus1.astype(str).isin(network.buses.index)]
-
- ### Reconnect trafos without buses due to kmean_clustering to existing buses and set s_nom_min and s_nom_max so decomissioning is not needed
- if not k_mean_clustering == False:
- network.transformers.loc[~network.transformers.bus0.isin(network.buses.index), 'bus0'] = (network.transformers.bus1[~network.transformers.bus0.isin(network.buses.index)]).apply(calc_nearest_point, network = network)
- network.lines.loc[network.lines.scn_name == ('extension_' + scn_extension), 's_nom_max'] = network.lines.s_nom_max - network.lines.s_nom_min
- network.lines.loc[network.lines.scn_name == ('extension_' + scn_extension), 's_nom'] = network.lines.s_nom_max
- network.lines.loc[network.lines.scn_name == ('extension_' + scn_extension), 's_nom_min'] = 0
-
+
return network
-def decommissioning(network, session, scn_decommissioning, k_mean_clustering):
+def decommissioning(network, session, version, scn_decommissioning, **kwargs):
'''
- Function that removes components in a decommissioning-scenario from the existing network container.
+ Function that removes components in a decommissioning-scenario from the existing network container.
Currently, only lines can be decommissioned.
- In future release, every PyPSA-component (e.g. buses, lines, links) can be decommissioned.
-
+
All components of the decommissioning scenario need to be inserted in the fitting 'model_draft.ego_grid_pf_hv_extension_' table.
The scn_name in the tables have to be labled with 'decommissioning_' + scn_name (e.g. 'decommissioning_nep2035').
@@ -723,101 +733,132 @@ def decommissioning(network, session, scn_decommissioning, k_mean_clustering):
network : The existing network container (e.g. scenario 'NEP 2035')
session : session-data
     scn_decommissioning : Name of the decommissioning scenario (WITHOUT 'decommissioning_')
-
-
+
+
Output
------
network : Network container including decommissioning
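+
+    Example
+    -------
+    A hedged sketch (the scenario name is illustrative):
+
+        network = decommissioning(network, session, version=None,
+                                  scn_decommissioning='nep2035')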
'''
- if not k_mean_clustering:
+ """components = ['Line', 'Link']
+ for comp in components:
+ if version == None:
+ ormclass = getattr(import_module('egoio.db_tables.model_draft'), ('EgoGridPfHvExtension' + comp))
+ else:
+ ormclass = getattr(import_module('egoio.db_tables.grid'), 'EgoPfHvExtension' + comp)
+
+ query = session.query(ormclass).filter(ormclass.scn_name ==\
+ 'decommissioning_' + scn_decommissioning)
+
+ df_decommisionning = pd.read_sql(query.statement,
+ session.bind,
+ index_col=comp.lower() + '_id'))
+
+ df_decommisionning.index = df_decommisionning.index.astype(str)
+    How can network.lines and network.links be handled in a loop?
+
+ """
+
+    if version is None:
ormclass = getattr(import_module('egoio.db_tables.model_draft'), 'EgoGridPfHvExtensionLine')
+ else:
+        ormclass = getattr(import_module('egoio.db_tables.grid'),
+                           'EgoPfHvExtensionLine')
+
- query = session.query(ormclass).filter(
+ query = session.query(ormclass).filter(
ormclass.scn_name == 'decommissioning_' + scn_decommissioning)
- df_decommisionning = pd.read_sql(query.statement,
+
+ df_decommisionning = pd.read_sql(query.statement,
session.bind,
index_col='line_id')
- df_decommisionning.index = df_decommisionning.index.astype(str)
+ df_decommisionning.index = df_decommisionning.index.astype(str)
- ### Drop lines from existing network, if they will be decommisioned
- network.lines = network.lines[~network.lines.index.isin(df_decommisionning.index)]
+    # Drop decommissioned lines from the existing network
+ network.lines = network.lines[~network.lines.index.isin(df_decommisionning.index)]
return network
-
-def distance (x0, x1, y0, y1):
+def distance(x0, x1, y0, y1):
'''
- Function that calculates the square of the distance between two points.
-
-
+ Function that calculates the square of the distance between two points.
+
+
Input
-----
x0: x - coordinate of point 0
x1: x - coordinate of point 1
y0: y - coordinate of point 0
y1: y - coordinate of point 1
-
-
+
+
Output
------
distance : square of distance
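+
+    Example
+    -------
+    A hedged sketch with single-element pandas Series (values are
+    illustrative; a 3-4-5 triangle yields a squared distance of 25):
+
+        import pandas as pd
+        distance(pd.Series([0.]), pd.Series([3.]),
+                 pd.Series([0.]), pd.Series([4.]))   # -> array([ 25.])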
'''
- ### Calculate square of the distance between two points (Pythagoras)
- distance = (x1.values- x0.values)*(x1.values- x0.values) + (y1.values- y0.values)*(y1.values- y0.values)
+ # Calculate square of the distance between two points (Pythagoras)
+    distance = (x1.values - x0.values) * (x1.values - x0.values)\
+        + (y1.values - y0.values) * (y1.values - y0.values)
return distance
+
def calc_nearest_point(bus1, network):
'''
- Function that finds the geographical nearest point in a network from a given bus.
-
-
+ Function that finds the geographical nearest point in a network from a given bus.
+
+
Input
-----
- bus1: id of bus
+ bus1: id of bus
network: network container including the comparable buses
-
-
+
+
Output
------
bus0 : bus_id of nearest point
-
- '''
+
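+    Example
+    -------
+    A hedged sketch (the bus id is illustrative):
+
+        nearest_bus = calc_nearest_point('25402', network)
+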
+ '''
bus1_index = network.buses.index[network.buses.index == bus1]
-
- forbidden_buses = np.append(bus1_index.values, network.lines.bus1[network.lines.bus0 == bus1].values)
-
- forbidden_buses = np.append(forbidden_buses, network.lines.bus0[network.lines.bus1 == bus1].values)
-
- forbidden_buses = np.append(forbidden_buses, network.links.bus0[network.links.bus1 == bus1].values)
-
- forbidden_buses = np.append(forbidden_buses, network.links.bus1[network.links.bus0 == bus1].values)
-
+
+ forbidden_buses = np.append(
+ bus1_index.values, network.lines.bus1[network.lines.bus0 == bus1].values)
+
+ forbidden_buses = np.append(
+ forbidden_buses, network.lines.bus0[network.lines.bus1 == bus1].values)
+
+ forbidden_buses = np.append(
+ forbidden_buses, network.links.bus0[network.links.bus1 == bus1].values)
+
+ forbidden_buses = np.append(
+ forbidden_buses, network.links.bus1[network.links.bus0 == bus1].values)
+
x0 = network.buses.x[network.buses.index.isin(bus1_index)]
-
+
y0 = network.buses.y[network.buses.index.isin(bus1_index)]
-
+
comparable_buses = network.buses[~network.buses.index.isin(forbidden_buses)]
x1 = comparable_buses.x
y1 = comparable_buses.y
-
- distance = (x1.values- x0.values)*(x1.values- x0.values) + (y1.values- y0.values)*(y1.values- y0.values)
-
+
+ distance = (x1.values - x0.values)*(x1.values - x0.values) + \
+ (y1.values - y0.values)*(y1.values - y0.values)
+
min_distance = distance.min()
-
- bus0 = comparable_buses[(((x1.values- x0.values)*(x1.values- x0.values) + (y1.values- y0.values)*(y1.values- y0.values)) == min_distance) ]
+
+    bus0 = comparable_buses[distance == min_distance]
bus0 = bus0.index[bus0.index == bus0.index.max()]
bus0 = ''.join(bus0.values)
return bus0
+
if __name__ == '__main__':
if pypsa.__version__ not in ['0.6.2', '0.11.0']:
print('Pypsa version %s not supported.' % pypsa.__version__)
diff --git a/etrago/tools/plot.py b/etrago/tools/plot.py
index e5d9a6c9..119c8ef3 100644
--- a/etrago/tools/plot.py
+++ b/etrago/tools/plot.py
@@ -29,7 +29,9 @@
import pandas as pd
import numpy as np
import time
+import math
from math import sqrt, log10
+
if 'READTHEDOCS' not in os.environ:
from geoalchemy2.shape import to_shape
@@ -40,6 +42,12 @@
__license__ = "GNU Affero General Public License Version 3 (AGPL-3.0)"
__author__ = "ulfmueller, MarlonSchlemminger, mariusves, lukasol"
+basemap_present = True
+try:
+ from mpl_toolkits.basemap import Basemap
+except ImportError:
+ basemap_present = False
+
def add_coordinates(network):
"""
@@ -94,58 +102,89 @@ def plot_line_loading(
boundaries=[],
arrows=False):
"""
- Plot line loading as color on lines
+ Plots line loading as a colored heatmap.
- Displays line loading relative to nominal capacity
+ Line loading is displayed as relative to nominal capacity in %.
Parameters
----------
network : PyPSA network container
Holds topology of grid including results from powerflow analysis
+ timesteps : range
+ Defines which timesteps are considered. If more than one, an
+ average line loading is calculated.
filename : str
Specify filename
If not given, figure will be show directly
+ boundaries : list
+ If given, the colorbar is fixed to a given min and max value
+ arrows : bool
+ If True, the direction of the power flows is displayed as
+ arrows.
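+
+    Example
+    -------
+    A hedged sketch for a solved network (the timestep range is
+    illustrative)::
+
+        plot_line_loading(network, timesteps=range(2), arrows=True)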
"""
# TODO: replace p0 by max(p0,p1) and analogously for q0
# TODO: implement for all given snapshots
# calculate relative line loading as S/S_nom
# with S = sqrt(P^2 + Q^2)
- x = time.time()
cmap = plt.cm.jet
+ array_line = [['Line'] * len(network.lines), network.lines.index]
+ array_link = [['Link'] * len(network.links), network.links.index]
+
if network.lines_t.q0.empty:
- array_line = [['Line'] * len(network.lines), network.lines.index]
loading_lines = pd.Series((network.lines_t.p0.mul(
network.snapshot_weightings, axis=0).loc[network.snapshots[
- timesteps]].abs().sum() / (network.lines.s_nom)).data,
+ timesteps]].abs().sum() / (network.lines.s_nom_opt)).data,
index=array_line)
- load_lines_rel = (
- loading_lines / network.snapshot_weightings\
- [network.snapshots[timesteps]].sum()) * 100
-
- array_link = [['Link'] * len(network.links), network.links.index]
-
- loading_links = pd.Series((network.links_t.p0.mul(
+ else:
+ loading_lines = pd.Series(((network.lines_t.p0.mul(
+ network.snapshot_weightings, axis=0)\
+ .loc[network.snapshots[timesteps]].abs().sum() ** 2 +\
+ network.lines_t.q0.mul(
+ network.snapshot_weightings, axis=0)\
+ .loc[network.snapshots[timesteps]].abs().sum() ** 2).\
+ apply(sqrt) / (network.lines.s_nom_opt)).data, index =
+ array_line)
+
+    # Avoid bidirectional links covering each other in the plot
+ network.links['linked_to'] = 0
+ for i, row in network.links.iterrows():
+ if not (network.links.index[(network.links.bus0 == row['bus1']) &
+ (network.links.bus1 == row['bus0']) &
+ (network.links.length == row['length']
+ )]).empty:
+
+ l = network.links.index[(network.links.bus0 == row['bus1']) &
+ (network.links.bus1 == row['bus0']) &
+ (network.links.length == row['length'])]
+
+                network.links.set_value(i, 'linked_to', l.values[0])
+
+ network.links.linked_to = network.links.linked_to.astype(str)
+ link_load = network.links_t.p0[network.links.index[
+ network.links.linked_to == '0']]
+
+ for i, row in network.links[network.links.linked_to != '0'].iterrows():
+            load = pd.DataFrame(index=network.links_t.p0.index,
+                                columns=['to', 'from'])
+            load['to'] = network.links_t.p0[row['linked_to']]
+            load['from'] = network.links_t.p0[i]
+            link_load[i] = load.abs().max(axis=1)
+
+ loading_links = pd.Series((link_load.mul(
network.snapshot_weightings, axis=0).loc[network.snapshots[
- timesteps]].abs().sum() / (network.links.p_nom)).data,
- index=array_link)
+ timesteps]].abs().sum()[network.links.index] / (
+ network.links.p_nom_opt)).data, index=array_link).dropna()
- load_links_rel = (
- loading_links / network.snapshot_weightings\
+        load_links_rel = (loading_links /
+                          network.snapshot_weightings
[network.snapshots[timesteps]].sum())* 100
+
+ load_lines_rel = (loading_lines / network.snapshot_weightings\
+ [network.snapshots[timesteps]].sum()) * 100
- loading = load_lines_rel.append(load_links_rel)
-
- else:
- loading = ((network.lines_t.p0.mul(network.snapshot_weightings, axis=0)
- .loc[network.snapshots[timesteps]].abs().sum() ** 2 +
- network.lines_t.q0.mul(network.snapshot_weightings, axis=0)
- .loc[network.snapshots[timesteps]].abs().sum() ** 2).
- apply(sqrt) /((network.lines.s_nom)
- * network.snapshots[timesteps].size)) * 100
-
- # do the plotting
+ loading = load_lines_rel.append(load_links_rel)
ll = network.plot(line_colors=loading, line_cmap=cmap,
title="Line loading", line_widths=0.55)
@@ -160,9 +199,15 @@ def plot_line_loading(
cb = plt.colorbar(ll[1], boundaries=v,
ticks=v[0:101:10])
-
- cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+ cb_Link = plt.colorbar(ll[2], boundaries=v,
+ ticks=v[0:101:10])
+ cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+ cb_Link.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+ cb_Link.remove()
+
cb.set_label('Line loading in %')
if arrows:
@@ -194,9 +239,6 @@ def plot_line_loading(
plt.savefig(filename)
plt.close()
- y = time.time()
- z = (y - x) / 60
- print(z)
def plot_line_loading_diff(networkA, networkB, timestep=0):
@@ -311,9 +353,25 @@ def shiftedColorMap(
cb = plt.colorbar(ll[1])
cb.set_label('Difference in line loading in % of s_nom')
+
-def extension_overlay_network(network, filename=None, boundaries=[0, 100]):
+def network_extension(network, method='rel', filename=None, boundaries=[]):
+ """Plot relative or absolute network extension of AC- and DC-lines.
+
+ Parameters
+ ----------
+ network: PyPSA network container
+ Holds topology of grid including results from powerflow analysis
+ method: str
+ Choose 'rel' for extension relative to s_nom and 'abs' for
+ absolute extensions.
+ filename: str or None
+        Path to save the figure to; if None, the figure is shown directly
+ boundaries: array
+ Set boundaries of heatmap axis
+
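+    Example
+    -------
+    A hedged sketch for a solved network (the filename is illustrative)::
+
+        network_extension(network, method='abs', filename='extension.png')
+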
+ """
cmap = plt.cm.jet
@@ -325,41 +383,122 @@ def extension_overlay_network(network, filename=None, boundaries=[0, 100]):
array_line = [['Line'] * len(overlay_network.lines),
overlay_network.lines.index]
+
+ array_link = [['Link'] * len(overlay_network.links),
+ overlay_network.links.index]
+
+ if method == 'rel':
- extension_lines = pd.Series((100 *
+ extension_lines = pd.Series((100 *
(overlay_network.lines.s_nom_opt -
overlay_network.lines.s_nom_min) /
overlay_network.lines.s_nom).data,
index=array_line)
- array_link = [['Link'] * len(overlay_network.links),
- overlay_network.links.index]
-
- extension_links = pd.Series((100 *
- overlay_network.links.p_nom_opt /
+ extension_links = pd.Series((100 *
+ (overlay_network.links.p_nom_opt -
+ overlay_network.links.p_nom_min)/
(overlay_network.links.p_nom)).data,
index=array_link)
+ if method == 'abs':
+ extension_lines = pd.Series(
+ (overlay_network.lines.s_nom_opt -
+ overlay_network.lines.s_nom_min).data,
+ index=array_line)
+
+ extension_links = pd.Series(
+ (overlay_network.links.p_nom_opt -
+ overlay_network.links.p_nom_min).data,
+ index=array_link)
+
extension = extension_lines.append(extension_links)
-
- network.plot(line_colors="grey", bus_sizes=0, line_widths=0.55)
+
+    # Plot the whole network in the background of the plot
+    network.plot(
+        line_colors=pd.Series("grey", index=[['Line'] * len(
+            network.lines), network.lines.index]).append(
+            pd.Series("grey", index=[['Link'] * len(network.links),
+                                     network.links.index])),
+        bus_sizes=0,
+        line_widths=pd.Series(0.55, index=[['Line'] * len(network.lines),
+                                           network.lines.index]).append(
+            pd.Series(0.7, index=[['Link'] * len(network.links),
+                                  network.links.index])))
ll = overlay_network.plot(
line_colors=extension,
line_cmap=cmap,
bus_sizes=0,
title="Optimized AC- and DC-line extension",
- line_widths=2)
+        line_widths=pd.Series(0.7, index=array_line).append(
+            pd.Series(0.75, index=array_link)))
+
+ if not boundaries:
+ v = np.linspace(min(extension), max(extension), 101)
+ boundaries = [min(extension), max(extension)]
+
+ else:
+ v = np.linspace(boundaries[0], boundaries[1], 101)
+
+ cb = plt.colorbar(ll[1], boundaries=v,
+ ticks=v[0:101:10])
+
+ cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+ if method == 'rel':
+ cb.set_label('line extension relative to s_nom in %')
+ if method == 'abs':
+ cb.set_label('line extension in MW')
+ if filename is None:
+ plt.show()
+ else:
+ plt.savefig(filename)
+ plt.close()
- v = np.linspace(boundaries[0], boundaries[1], 101)
+def network_extension_diff(networkA, networkB, filename=None, boundaries=[]):
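+    """Plots the deviation of AC- and DC-line extension between two solved
+    networks, in percent of networkA's optimised capacity.
+    """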
+
+ cmap = plt.cm.jet
+
+ array_line = [['Line'] * len(networkA.lines), networkA.lines.index]
+
+    extension_lines = pd.Series(
+        (100 * (networkA.lines.s_nom_opt - networkB.lines.s_nom_opt) /
+         networkA.lines.s_nom_opt).values,
+        index=array_line)
+
+ array_link = [['Link'] * len(networkA.links), networkA.links.index]
+
+    extension_links = pd.Series(
+        (100 * (networkA.links.p_nom_opt - networkB.links.p_nom_opt) /
+         networkA.links.p_nom_opt).values,
+        index=array_link)
+
+ extension = extension_lines.append(extension_links)
+
+ ll = networkA.plot(
+ line_colors=extension,
+ line_cmap=cmap,
+ bus_sizes=0,
+        title="Deviation of AC- and DC-line extension",
+ line_widths=2)
+
+ if not boundaries:
+ v = np.linspace(min(extension).round(0), max(extension).round(0), 101)
+
+ else:
+ v = np.linspace(boundaries[0], boundaries[1], 101)
+
cb = plt.colorbar(ll[1], boundaries=v,
ticks=v[0:101:10])
cb_Link = plt.colorbar(ll[2], boundaries=v,
ticks=v[0:101:10])
- cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
- cb_Link.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+ cb_Link.set_clim(vmin=min(v), vmax=max(v))
cb_Link.remove()
- cb.set_label('line extension relative to s_nom in %')
+    cb.set_label('line extension deviation in %')
if filename is None:
plt.show()
@@ -419,24 +558,43 @@ def full_load_hours(
plt.savefig(filename)
plt.close()
+def plot_q_flows(network):
+ cmap_line = plt.cm.jet
+
+    q_flows_max = network.lines_t.q0.abs().max() / network.lines.s_nom
+
+    ll = network.plot(line_colors=q_flows_max, line_cmap=cmap_line)
+ boundaries = [min(q_flows_max), max(q_flows_max)]
+ v = np.linspace(boundaries[0], boundaries[1], 101)
+
+ cb = plt.colorbar(ll[1], boundaries=v,
+ ticks=v[0:101:10])
+
+ cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+
+def max_load(network, boundaries=[], filename=None, two_cb=False):
-def max_load(network, boundaries=[0, 100], filename=None, two_cb=False):
cmap_line = plt.cm.jet
cmap_link = plt.cm.jet
array_line = [['Line'] * len(network.lines), network.lines.index]
-
- load_lines = pd.Series((abs(network.lines_t.p0).max(
- ) / (network.lines.s_nom) * 100).data, index=array_line)
-
array_link = [['Link'] * len(network.links), network.links.index]
+ if network.lines_t.q0.empty:
+ load_lines = pd.Series((abs(network.lines_t.p0).max(
+ ) / (network.lines.s_nom) * 100).data, index=array_line)
+
+    else:
+        load_lines = pd.Series(((network.lines_t.p0 ** 2 +
+                                 network.lines_t.q0 ** 2).max().apply(sqrt) /
+                                network.lines.s_nom * 100).data,
+                               index=array_line)
+
load_links = pd.Series((abs(network.links_t.p0.max(
- ) / (network.links.p_nom)) * 100).data, index=array_link)
+ ) / (network.links.p_nom)) * 100).data, index=array_link)
- load_hours = load_lines.append(load_links)
+ max_load = load_lines.append(load_links)
ll = network.plot(
- line_colors=load_hours,
+ line_colors=max_load,
line_cmap={
'Line': cmap_line,
'Link': cmap_link},
@@ -445,18 +603,18 @@ def max_load(network, boundaries=[0, 100], filename=None, two_cb=False):
line_widths=2)
if not boundaries:
- cb = plt.colorbar(ll[1])
- cb_Link = plt.colorbar(ll[2])
- elif boundaries:
- v1 = np.linspace(boundaries[0], boundaries[1], 101)
- v = np.linspace(boundaries[0], boundaries[1], 101)
- cb_Link = plt.colorbar(ll[2], boundaries=v1,
- ticks=v[0:101:10])
- cb_Link.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+ boundaries = [min(max_load), max(max_load)]
- cb = plt.colorbar(ll[1], boundaries=v,
- ticks=v[0:101:10])
- cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+ v = np.linspace(boundaries[0], boundaries[1], 101)
+
+ cb = plt.colorbar(ll[1], boundaries=v,
+ ticks=v[0:101:10])
+
+ cb.set_clim(vmin=boundaries[0], vmax=boundaries[1])
+
+ cb_Link = plt.colorbar(ll[2], boundaries=v,
+ ticks=v[0:101:10])
+ cb_Link.set_clim(vmin=boundaries[0], vmax=boundaries[1])
if two_cb:
# cb_Link.set_label('Maximum load of DC-lines %')
@@ -728,11 +886,10 @@ def plot_voltage(network, boundaries=[]):
def curtailment(network, carrier='solar', filename=None):
- p_by_carrier = network.generators_t.p.mul(network.snapshot_weightings,
- axis=0).groupby(network.generators.carrier, axis=1).sum()
+ p_by_carrier = network.generators_t.p.groupby\
+ (network.generators.carrier, axis=1).sum()
capacity = network.generators.groupby("carrier").sum().at[carrier, "p_nom"]
- p_available = network.generators_t.p_max_pu.mul(
- network.snapshot_weightings, axis=0).multiply(network.generators["p_nom"])
+ p_available = network.generators_t.p_max_pu.multiply(network.generators["p_nom"])
p_available_by_carrier = p_available.groupby(
network.generators.carrier, axis=1).sum()
p_curtailed_by_carrier = p_available_by_carrier - p_by_carrier
@@ -830,10 +987,10 @@ def storage_distribution(network, scaling=1, filename=None):
plt.close()
-def storage_expansion(network, scaling=1, filename=None):
+
+def storage_expansion(network, basemap=True, scaling=1, filename=None):
"""
Plot storage distribution as circles on grid nodes
-
Displays storage size and distribution in network.
Parameters
----------
@@ -844,19 +1001,30 @@ def storage_expansion(network, scaling=1, filename=None):
If not given, figure will be show directly
"""
- stores = network.storage_units[network.storage_units.carrier ==
+ stores = network.storage_units[network.storage_units.carrier ==
'extendable_storage']
- storage_distribution =\
- network.storage_units.p_nom_opt[stores.index].groupby(
- network.storage_units.bus).sum().reindex(network.buses.index,
- fill_value=0.)
+ batteries = stores[stores.max_hours == 6]
+ hydrogen = stores[stores.max_hours == 168]
+ storage_distribution = network.storage_units.p_nom_opt[stores.index].groupby(
+ network.storage_units.bus).sum().reindex(network.buses.index, fill_value=0.)
+ battery_distribution = network.storage_units.p_nom_opt[batteries.index].groupby(
+ network.storage_units.bus).sum().reindex(network.buses.index, fill_value=0.)
+ hydrogen_distribution = network.storage_units.p_nom_opt[hydrogen.index].groupby(
+ network.storage_units.bus).sum().reindex(network.buses.index, fill_value=0.)
+
+ sbatt = network.storage_units.index[
+ (network.storage_units.p_nom_opt > 1) & (network.storage_units.capital_cost > 10) & (
+ network.storage_units.max_hours == 6)]
+ shydr = network.storage_units.index[
+ (network.storage_units.p_nom_opt > 1) & (network.storage_units.capital_cost > 10) & (
+ network.storage_units.max_hours == 168)]
fig, ax = plt.subplots(1, 1)
fig.set_size_inches(6, 6)
-
+
msd_max = storage_distribution.max()
- msd_median = storage_distribution[storage_distribution != 0].median()
- msd_min = storage_distribution[storage_distribution > 1].min()
+ msd_max_bat = battery_distribution.max()
+ msd_max_hyd = hydrogen_distribution.max()
if msd_max != 0:
LabelVal = int(log10(msd_max))
@@ -864,32 +1032,48 @@ def storage_expansion(network, scaling=1, filename=None):
LabelVal = 0
if LabelVal < 0:
LabelUnit = 'kW'
- msd_max, msd_median, msd_min = msd_max * \
- 1000, msd_median * 1000, msd_min * 1000
- storage_distribution = storage_distribution * 1000
+ msd_max, msd_max_bat, msd_max_hyd = msd_max * \
+ 1000, msd_max_bat * 1000, msd_max_hyd * 1000
+ battery_distribution = battery_distribution * 1000
+ hydrogen_distribution = hydrogen_distribution * 1000
elif LabelVal < 3:
LabelUnit = 'MW'
else:
LabelUnit = 'GW'
- msd_max, msd_median, msd_min = msd_max / \
- 1000, msd_median / 1000, msd_min / 1000
- storage_distribution = storage_distribution / 1000
-
- if sum(storage_distribution) == 0:
- network.plot(bus_sizes=0, ax=ax, title="No extendable storage")
+ msd_max, msd_max_bat, msd_max_hyd = msd_max / \
+ 1000, msd_max_bat / 1000, msd_max_hyd / 1000
+ battery_distribution = battery_distribution / 1000
+ hydrogen_distribution = hydrogen_distribution / 1000
+
+    batt_sum = network.storage_units.p_nom_opt[sbatt].sum()
+    hydr_sum = network.storage_units.p_nom_opt[shydr].sum()
+
+    if batt_sum < 1 and hydr_sum < 1:
+        print("No storage unit to plot")
+    elif batt_sum > 1 and hydr_sum < 1:
+        network.plot(bus_sizes=battery_distribution * scaling,
+                     bus_colors='orangered', ax=ax, line_widths=0.3)
+    elif batt_sum < 1 and hydr_sum > 1:
+        network.plot(bus_sizes=hydrogen_distribution * scaling,
+                     bus_colors='teal', ax=ax, line_widths=0.3)
else:
- network.plot(
- bus_sizes=storage_distribution * scaling,
- ax=ax,
- line_widths=0.3,
- title="Storage expansion distribution")
-
- # Here we create a legend:
- # we'll plot empty lists with the desired size and label
- for area in [msd_max, msd_median, msd_min]:
- plt.scatter([], [], c='white', s=area * scaling,
- label='= ' + str(round(area, 0)) + LabelUnit + ' ')
- plt.legend(scatterpoints=1, labelspacing=1, title='Storage size')
+ network.plot(bus_sizes=battery_distribution * scaling, bus_colors='orangered', ax=ax, line_widths=0.3)
+ network.plot(bus_sizes=hydrogen_distribution * scaling, bus_colors='teal', ax=ax, line_widths=0.3)
+
+ if basemap and basemap_present:
+ x = network.buses["x"]
+ y = network.buses["y"]
+ x1 = min(x)
+ x2 = max(x)
+ y1 = min(y)
+ y2 = max(y)
+
+ bmap = Basemap(resolution='l', epsg=network.srid, llcrnrlat=y1, urcrnrlat=y2, llcrnrlon=x1, urcrnrlon=x2, ax=ax)
+ bmap.drawcountries()
+ bmap.drawcoastlines()
+
+    if msd_max_hyd != 0:
+        plt.scatter([], [], c='teal', s=msd_max_hyd * scaling,
+                    label='= ' + str(round(msd_max_hyd, 0)) + LabelUnit +
+                    ' hydrogen storage')
+    if msd_max_bat != 0:
+        plt.scatter([], [], c='orangered', s=msd_max_bat * scaling,
+                    label='= ' + str(round(msd_max_bat, 0)) + LabelUnit +
+                    ' battery storage')
+    plt.legend(scatterpoints=1, labelspacing=1,
+               title='Storage size and technology', borderpad=1.3, loc=2)
if filename is None:
plt.show()
@@ -897,6 +1081,8 @@ def storage_expansion(network, scaling=1, filename=None):
plt.savefig(filename)
plt.close()
+ return
+
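+# A minimal usage sketch (not part of the module): assuming `network` is a
+# solved eTraGo/PyPSA network with extendable storage units, plot battery
+# (orangered) and hydrogen (teal) expansion and save the figure:
+#
+#     storage_expansion(network, basemap=True, scaling=1,
+#                       filename='storage_expansion.png')
+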
def gen_dist(
network,
@@ -1128,31 +1314,104 @@ def gen_dist(
def nodal_gen_dispatch(
network,
- scaling=False,
+ networkB=None,
techs=['wind_onshore', 'solar'],
+ item='energy',
+ direction=None,
+ scaling=1,
filename=None):
-
- gens = network.generators[network.generators.carrier.isin(techs)]
- dispatch =\
- network.generators_t.p[gens.index].mul(network.snapshot_weightings,
- axis=0).sum().groupby(
- [network.generators.bus, network.generators.carrier]).sum()
- colors = coloring()
-
- # network.generators.carrier.unique()}
- subcolors = {a: colors[a] for a in techs}
-
- if scaling is False:
- scaling = (1 / dispatch.max())
-
- fig, ax = plt.subplots(1, 1)
+ """
+ Plot nodal dispatch or capacity. If networkB is given, difference in
+ dispatch is plotted.
+
+ Parameters
+ ----------
+ network : PyPSA network container
+ Holds topology of grid including results from powerflow analysis
+ networkB : PyPSA network container
+ If given and item is 'energy', difference in dispatch between network
+ and networkB is plotted. If item is 'capacity', networkB is ignored.
+ default None
+ techs : None or list,
+ Techs to plot. If None, all techs are plotted.
+ default ['wind_onshore', 'solar']
+ item : str
+ Specifies the plotted item. Options are 'energy' and 'capacity'.
+ default 'energy'
+ direction : str
+ Only considered if networkB is given and item is 'energy'. Specifies
+ the direction of change in dispatch between network and networkB.
+ If 'positive', generation per tech which is higher in network than in
+ networkB is plotted.
+        If 'negative', generation per tech which is lower in network than
+ in networkB is plotted.
+ If 'absolute', total change per node is plotted.
+ Green nodes have higher dispatch in network than in networkB.
+ Red nodes have lower dispatch in network than in networkB.
+ default None
+ scaling : int
+ Scaling to change plot sizes.
+ default 1
+ filename : path to folder
+ """
+
+ if techs:
+ gens = network.generators[network.generators.carrier.isin(techs)]
+ elif techs is None:
+ gens = network.generators
+ techs = gens.carrier.unique()
+ if item == 'capacity':
+ dispatch = gens.p_nom.groupby([network.generators.bus,
+ network.generators.carrier]).sum()
+ elif item == 'energy':
+ if networkB:
+ dispatch_network =\
+ network.generators_t.p[gens.index].mul(
+ network.snapshot_weightings, axis=0).groupby(
+ [network.generators.bus, network.generators.carrier],
+ axis=1).sum()
+ dispatch_networkB =\
+ networkB.generators_t.p[gens.index].mul(
+ networkB.snapshot_weightings, axis=0).groupby(
+ [networkB.generators.bus, networkB.generators.carrier],
+ axis=1).sum()
+ dispatch = dispatch_network - dispatch_networkB
+
+ if direction == 'positive':
+ dispatch = dispatch[dispatch > 0].fillna(0)
+ elif direction == 'negative':
+ dispatch = dispatch[dispatch < 0].fillna(0)
+ elif direction == 'absolute':
+ pass
+ else:
+ return('No valid direction given.')
+ dispatch = dispatch.sum()
+
+ elif networkB is None:
+ dispatch =\
+ network.generators_t.p[gens.index].mul(
+ network.snapshot_weightings, axis=0).sum().groupby(
+ [network.generators.bus, network.generators.carrier]).sum()
+
+ fig, ax = plt.subplots(1, 1)
+ scaling = 1/(max(abs(dispatch.groupby(level=0).sum())))*scaling
+ if direction != 'absolute':
+ colors = coloring()
+ subcolors = {a: colors[a] for a in techs}
+ dispatch = dispatch.abs() + 1e-9
+ else:
+ dispatch = dispatch.sum(level=0)
+ colors = {s[0]: 'green' if s[1] > 0 else 'red'
+ for s in dispatch.iteritems()}
+ dispatch = dispatch.abs()
+ subcolors = {'negative': 'red', 'positive': 'green'}
+
network.plot(
- bus_sizes=dispatch *
- scaling,
- bus_colors=colors,
- line_widths=0.2,
- margin=0.01,
- ax=ax)
+ bus_sizes=dispatch * scaling,
+ bus_colors=colors,
+ line_widths=0.2,
+ margin=0.01,
+ ax=ax)
fig.subplots_adjust(right=0.8)
plt.subplots_adjust(wspace=0, hspace=0.001)
@@ -1163,6 +1422,7 @@ def nodal_gen_dispatch(
patchList.append(data_key)
ax.legend(handles=patchList, loc='upper left')
+ ax.autoscale()
if filename is None:
plt.show()
@@ -1172,6 +1432,61 @@ def nodal_gen_dispatch(
return
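+
+# A minimal usage sketch (not part of the module): assuming two solved
+# networks `networkA` and `networkB`, plot where wind and solar dispatch is
+# higher in `networkA`:
+#
+#     nodal_gen_dispatch(networkA, networkB=networkB,
+#                        techs=['wind_onshore', 'solar'],
+#                        item='energy', direction='positive')
+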
+def nodal_production_balance(
+ network,
+ snapshot='all',
+ scaling=0.00001,
+ filename=None):
+ """
+ Plots the nodal difference between generation and consumption.
+
+ Parameters
+ ----------
+ network : PyPSA network container
+ Holds topology of grid including results from powerflow analysis
+ snapshot : int or 'all'
+ Snapshot to plot.
+ default 'all'
+ scaling : int
+ Scaling to change plot sizes.
+        default 0.00001
+ filename : path to folder
+
+ """
+ fig, ax = plt.subplots(1, 1)
+ gen = network.generators_t.p.groupby(network.generators.bus, axis=1).sum()
+ load = network.loads_t.p.groupby(network.loads.bus, axis=1).sum()
+
+ if snapshot == 'all':
+ diff = (gen - load).sum()
+ else:
+ timestep = network.snapshots[snapshot]
+ diff = (gen - load).loc[timestep]
+
+ colors = {s[0]: 'green' if s[1] > 0 else 'red'
+ for s in diff.iteritems()}
+ subcolors = {'Net Consumer': 'red', 'Net Producer': 'green'}
+ diff = diff.abs()
+ network.plot(
+ bus_sizes=diff * scaling,
+ bus_colors=colors,
+ line_widths=0.2,
+ margin=0.01,
+ ax=ax)
+
+ patchList = []
+ for key in subcolors:
+ data_key = mpatches.Patch(color=subcolors[key], label=key)
+ patchList.append(data_key)
+ ax.legend(handles=patchList, loc='upper left')
+ ax.autoscale()
+    if filename is None:
+        plt.show()
+    else:
+        plt.savefig(filename)
+        plt.close()
+
+ return
+
+
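+# A minimal usage sketch (not part of the module): assuming `network` holds
+# powerflow results, mark net producers (green) and net consumers (red),
+# either summed over all snapshots or for a single one:
+#
+#     nodal_production_balance(network, snapshot='all')
+#     nodal_production_balance(network, snapshot=10, scaling=0.0001)
+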
if __name__ == '__main__':
pass
diff --git a/etrago/tools/snapshot_clustering.py b/etrago/tools/snapshot_clustering.py
index c1dfc3ac..10335b37 100644
--- a/etrago/tools/snapshot_clustering.py
+++ b/etrago/tools/snapshot_clustering.py
@@ -21,12 +21,12 @@
"""
Please add Header!
"""
-
-import matplotlib.pyplot as plt
-import numpy as np
-from scipy.cluster.hierarchy import dendrogram
import pandas as pd
import os
+if 'READTHEDOCS' not in os.environ:
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from scipy.cluster.hierarchy import dendrogram
###############################################################################
diff --git a/etrago/tools/utilities.py b/etrago/tools/utilities.py
index 7dd87384..1460b9b9 100644
--- a/etrago/tools/utilities.py
+++ b/etrago/tools/utilities.py
@@ -21,11 +21,30 @@
"""
Utilities.py includes a wide range of useful functions.
"""
-import pandas as pd
-import numpy as np
+
import os
import time
from pyomo.environ import (Var, Constraint, PositiveReals, ConcreteModel)
+import numpy as np
+import pandas as pd
+import pypsa
+import json
+import logging
+import math
+
+
+geopandas = True
+try:
+ import geopandas as gpd
+ from shapely.geometry import Point
+ import geoalchemy2
+ from egoio.db_tables.model_draft import RenpassGisParameterRegion
+
+except ImportError:
+ geopandas = False
+
+logger = logging.getLogger(__name__)
+
__copyright__ = ("Flensburg University of Applied Sciences, "
"Europa-Universität Flensburg, "
@@ -80,11 +99,123 @@ def buses_grid_linked(network, voltage_level):
return df.index
-def clip_foreign(network):
+def geolocation_buses(network, session):
"""
- Delete all components and timelines located outside of Germany.
- Add transborder flows divided by country of origin as
- network.foreign_trade.
+    If geopandas is installed:
+    Use the geometries of the buses (x/y, i.e. lon/lat) and the country
+    polygons from RenpassGisParameterRegion in order to locate the buses.
+
+    Else:
+    Use the x/y coordinates to locate foreign buses.
+
+    Parameters
+    ----------
+    network_etrago : :class:`etrago.tools.io.NetworkScenario`
+        eTraGo network object compiled by :meth:`etrago.appl.etrago`
+    session : :sqlalchemy:`sqlalchemy.orm.session.Session<orm/session_basics.html>`
+        SQLAlchemy session to the OEDB
+
+ """
+ if geopandas:
+        # ToDo: check eTraGo stack generation plots and others that may need
+        # adaptation
+        # Start db connection
+ # get renpassG!S scenario data
+
+ RenpassGISRegion = RenpassGisParameterRegion
+
+ # Define regions
+ region_id = ['DE', 'DK', 'FR', 'BE', 'LU', 'AT',
+ 'NO', 'PL', 'CH', 'CZ', 'SE', 'NL']
+
+ query = session.query(RenpassGISRegion.gid,
+ RenpassGISRegion.u_region_id,
+ RenpassGISRegion.stat_level,
+ RenpassGISRegion.geom,
+ RenpassGISRegion.geom_point)
+
+ # get regions by query and filter
+ Regions = [(gid, u_region_id, stat_level, geoalchemy2.shape.to_shape(
+ geom),geoalchemy2.shape.to_shape(geom_point))
+ for gid, u_region_id, stat_level,
+ geom, geom_point in query.filter(RenpassGISRegion.u_region_id.
+ in_(region_id)).all()]
+
+ crs = {'init': 'epsg:4326'}
+ # transform lon lat to shapely Points and create GeoDataFrame
+ points = [Point(xy) for xy in zip(network.buses.x, network.buses.y)]
+ bus = gpd.GeoDataFrame(network.buses, crs=crs, geometry=points)
+ # Transform Countries Polygons as Regions
+ region = pd.DataFrame(
+ Regions, columns=['id', 'country', 'stat_level', 'Polygon',
+ 'Point'])
+ re = gpd.GeoDataFrame(region, crs=crs, geometry=region['Polygon'])
+ # join regions and buses by geometry which intersects
+ busC = gpd.sjoin(bus, re, how='inner', op='intersects')
+ # busC
+ # Drop non used columns
+ busC = busC.drop(['index_right', 'Point', 'id', 'Polygon',
+ 'stat_level', 'geometry'], axis=1)
+ # add busC to eTraGo.buses
+ network.buses['country_code'] = busC['country']
+        network.buses.loc[network.buses.country_code.isnull(),
+                          'country_code'] = 'DE'
+ # close session
+ session.close()
+
+ else:
+
+ buses_by_country(network)
+
+ transborder_lines_0 = network.lines[network.lines['bus0'].isin(
+ network.buses.index[network.buses['country_code'] != 'DE'])].index
+ transborder_lines_1 = network.lines[network.lines['bus1'].isin(
+ network.buses.index[network.buses['country_code']!= 'DE'])].index
+
+ #set country tag for lines
+ network.lines.loc[transborder_lines_0, 'country'] = \
+ network.buses.loc[network.lines.loc[transborder_lines_0, 'bus0'].\
+ values,'country_code'].values
+
+ network.lines.loc[transborder_lines_1, 'country'] = \
+ network.buses.loc[network.lines.loc[transborder_lines_1, 'bus1'].\
+ values,'country_code'].values
+ network.lines['country'].fillna('DE', inplace=True)
+ doubles = list(set(transborder_lines_0.intersection(transborder_lines_1)))
+ for line in doubles:
+ c_bus0 = network.buses.loc[network.lines.loc[line, 'bus0'],
+ 'country_code']
+ c_bus1 = network.buses.loc[network.lines.loc[line, 'bus1'],
+ 'country_code']
+ network.lines.loc[line, 'country'] = '{}{}'.format(c_bus0, c_bus1)
+
+ transborder_links_0 = network.links[network.links['bus0'].isin(
+ network.buses.index[network.buses['country_code']!= 'DE'])].index
+ transborder_links_1 = network.links[network.links['bus1'].isin(
+ network.buses.index[network.buses['country_code'] != 'DE'])].index
+
+ #set country tag for links
+ network.links.loc[transborder_links_0, 'country'] = \
+ network.buses.loc[network.links.loc[transborder_links_0, 'bus0'].\
+ values, 'country_code'].values
+
+ network.links.loc[transborder_links_1, 'country'] = \
+ network.buses.loc[network.links.loc[transborder_links_1, 'bus1'].\
+ values, 'country_code'].values
+ network.links['country'].fillna('DE', inplace=True)
+ doubles = list(set(transborder_links_0.intersection(transborder_links_1)))
+ for link in doubles:
+ c_bus0 = network.buses.loc[
+ network.links.loc[link, 'bus0'], 'country_code']
+ c_bus1 = network.buses.loc[
+ network.links.loc[link, 'bus1'], 'country_code']
+ network.links.loc[link, 'country'] = '{}{}'.format(c_bus0, c_bus1)
+
+ return network
+
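+# A minimal usage sketch (not part of the module): assuming `network` and an
+# open oedb `session`, tag every bus, line and link with a country code:
+#
+#     network = geolocation_buses(network, session)
+#     print(network.buses.country_code.value_counts())
+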
+
+def buses_by_country(network):
+ """
+ Find buses of foreign countries and return them as Pandas Series
Parameters
----------
@@ -93,18 +224,16 @@ def clip_foreign(network):
Returns
-------
- network : :class:`pypsa.Network
- Overall container of PyPSA
+ foreign_buses: Series containing buses by country
"""
- # get foreign buses by country
poland = pd.Series(index=network.
buses[(network.buses['x'] > 17)].index,
- data="Poland")
+ data="PL")
czech = pd.Series(index=network.
buses[(network.buses['x'] < 17) &
(network.buses['x'] > 15.1)].index,
- data="Czech")
+ data="CZ")
denmark = pd.Series(index=network.
buses[((network.buses['y'] < 60) &
(network.buses['y'] > 55.2)) |
@@ -112,9 +241,9 @@ def clip_foreign(network):
(network.buses['x'] < 11.97) &
(network.buses['y'] > 54.5))].
index,
- data="Denmark")
+ data="DK")
sweden = pd.Series(index=network.buses[(network.buses['y'] > 60)].index,
- data="Sweden")
+ data="SE")
austria = pd.Series(index=network.
buses[(network.buses['y'] < 47.33) &
(network.buses['x'] > 9) |
@@ -128,7 +257,7 @@ def clip_foreign(network):
(network.buses['y'] < 47.58)) |
(network.buses['y'] < 47.6) &
(network.buses['x'] > 14.1)].index,
- data="Austria")
+ data="AT")
switzerland = pd.Series(index=network.
buses[((network.buses['x'] > 8.1) &
(network.buses['x'] < 8.3) &
@@ -141,19 +270,19 @@ def clip_foreign(network):
(network.buses['x'] < 10.92) &
(network.buses['y'] > 49.91) &
(network.buses['y'] < 49.92))].index,
- data="Switzerland")
+ data="CH")
netherlands = pd.Series(index=network.
buses[((network.buses['x'] < 6.96) &
(network.buses['y'] < 53.15) &
(network.buses['y'] > 53.1)) |
((network.buses['x'] < 5.4) &
(network.buses['y'] > 52.1))].index,
- data="Netherlands")
+ data="NL")
luxembourg = pd.Series(index=network.
buses[((network.buses['x'] < 6.15) &
(network.buses['y'] < 49.91) &
(network.buses['y'] > 49.65))].index,
- data="Luxembourg")
+ data="LU")
france = pd.Series(index=network.
buses[(network.buses['x'] < 4.5) |
((network.buses['x'] > 7.507) &
@@ -168,18 +297,44 @@ def clip_foreign(network):
(network.buses['x'] < 6.76) &
(network.buses['y'] > 49.13) &
(network.buses['y'] < 49.16))].index,
- data="France")
+ data="FR")
foreign_buses = pd.Series()
foreign_buses = foreign_buses.append([poland, czech, denmark, sweden,
austria, switzerland,
netherlands, luxembourg, france])
+ network.buses['country_code'] = foreign_buses[network.buses.index]
+ network.buses['country_code'].fillna('DE', inplace=True)
+
+ return foreign_buses
+
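+# A minimal usage sketch (not part of the module): the fallback heuristic
+# needs only the bus coordinates, no database session:
+#
+#     foreign_buses = buses_by_country(network)
+#     print(foreign_buses.value_counts())  # buses per country (PL, CZ, ...)
+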
+
+def clip_foreign(network):
+ """
+ Delete all components and timelines located outside of Germany.
+ Add transborder flows divided by country of origin as
+ network.foreign_trade.
+
+ Parameters
+ ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ Returns
+ -------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+ """
+
+ # get foreign buses by country
+
+ foreign_buses = network.buses[network.buses.country_code != 'DE']
network.buses = network.buses.drop(
network.buses.loc[foreign_buses.index].index)
# identify transborder lines (one bus foreign, one bus not) and the country
# it is coming from
- transborder_lines = pd.DataFrame(index=network.lines[
+ """transborder_lines = pd.DataFrame(index=network.lines[
((network.lines['bus0'].isin(network.buses.index) == False) &
(network.lines['bus1'].isin(network.buses.index) == True)) |
((network.lines['bus0'].isin(network.buses.index) == True) &
@@ -203,12 +358,17 @@ def clip_foreign(network):
i)] = transborder_flows.loc[:, str(i)]*-1
network.foreign_trade = transborder_flows.\
- groupby(transborder_lines['country'], axis=1).sum()
+ groupby(transborder_lines['country'], axis=1).sum()"""
# drop foreign components
network.lines = network.lines.drop(network.lines[
(network.lines['bus0'].isin(network.buses.index) == False) |
(network.lines['bus1'].isin(network.buses.index) == False)].index)
+
+ network.links = network.links.drop(network.links[
+ (network.links['bus0'].isin(network.buses.index) == False) |
+ (network.links['bus1'].isin(network.buses.index) == False)].index)
+
network.transformers = network.transformers.drop(network.transformers[
(network.transformers['bus0'].isin(network.buses.index) == False) |
(network.transformers['bus1'].isin(network.
@@ -221,7 +381,8 @@ def clip_foreign(network):
(network.storage_units['bus'].isin(network.
buses.index) == False)].index)
- components = ['loads', 'generators', 'lines', 'buses', 'transformers']
+ components = ['loads', 'generators', 'lines', 'buses', 'transformers',
+ 'links']
for g in components: # loads_t
h = g + '_t'
nw = getattr(network, h) # network.loads_t
@@ -234,6 +395,80 @@ def clip_foreign(network):
return network
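+
+# A minimal usage sketch (not part of the module): country codes have to be
+# set (see geolocation_buses or buses_by_country) before clipping:
+#
+#     network = geolocation_buses(network, session)
+#     network = clip_foreign(network)
+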
+def foreign_links(network):
+ """
+    Change transmission technology of foreign lines from AC to DC (links).
+
+    Parameters
+ ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ Returns
+ -------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ """
+ foreign_buses = network.buses[network.buses.country_code != 'DE']
+
+ foreign_lines = network.lines[network.lines.bus0.astype(str).isin(
+ foreign_buses.index) | network.lines.bus1.astype(str).isin(
+ foreign_buses.index)]
+
+ foreign_links = network.links[network.links.bus0.astype(str).isin(
+ foreign_buses.index) | network.links.bus1.astype(str).isin(
+ foreign_buses.index)]
+
+ network.links = network.links.drop(
+ network.links.index[network.links.index.isin(foreign_links.index)
+ & network.links.bus0.isin(network.links.bus1) &
+ (network.links.bus0 > network.links.bus1)])
+
+ foreign_links = network.links[network.links.bus0.astype(str).isin(
+ foreign_buses.index) | network.links.bus1.astype(str).isin(
+ foreign_buses.index)]
+
+ network.links.loc[foreign_links.index, 'p_min_pu'] = -1
+
+ network.links.loc[foreign_links.index, 'efficiency'] = 1
+
+ network.import_components_from_dataframe(
+ foreign_lines.loc[:, ['bus0', 'bus1', 'capital_cost', 'length']]
+ .assign(p_nom=foreign_lines.s_nom).assign(p_min_pu=-1)
+ .set_index('N' + foreign_lines.index),
+ 'Link')
+
+ network.lines = network.lines.drop(foreign_lines.index)
+
+ return network
+
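+# A minimal usage sketch (not part of the module): replaces foreign AC lines
+# by bidirectional DC links named 'N<old line id>' with p_min_pu=-1 and
+# efficiency 1; country codes have to be set beforehand:
+#
+#     network = foreign_links(network)
+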
+
+def set_q_foreign_loads(network, cos_phi=1):
+ """
+    Set reactive power time series of loads in neighbouring countries.
+
+    Parameters
+    ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ Returns
+ -------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ """
+ foreign_buses = network.buses[network.buses.country_code != 'DE']
+
+ network.loads_t['q_set'][network.loads.index[
+ network.loads.bus.astype(str).isin(foreign_buses.index)]] = \
+ network.loads_t['p_set'][network.loads.index[
+ network.loads.bus.astype(str).isin(
+ foreign_buses.index)]] * math.tan(math.acos(cos_phi))
+
+ network.generators.control[network.generators.control == 'PQ'] = 'PV'
+
+ return network
+
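+# A short worked example (not part of the module): with cos_phi=0.95 the
+# factor tan(acos(0.95)) is about 0.329, so every foreign load gets
+# q_set = 0.329 * p_set; the default cos_phi=1 keeps q_set at zero:
+#
+#     network = set_q_foreign_loads(network, cos_phi=0.95)
+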
+
def connected_grid_lines(network, busids):
""" Get grid lines connected to given buses.
@@ -300,8 +535,8 @@ def load_shedding(network, **kwargs):
network.add("Carrier", "load")
start = network.generators.index.to_series().str.rsplit(
- ' ').str[0].astype(int).sort_values().max()+1
- index = list(range(start, start+len(network.buses.index)))
+ ' ').str[0].astype(int).sort_values().max() + 1
+ index = list(range(start, start + len(network.buses.index)))
network.import_components_from_dataframe(
pd.DataFrame(
dict(marginal_cost=marginal_cost,
@@ -319,9 +554,9 @@ def data_manipulation_sh(network):
from geoalchemy2.shape import from_shape, to_shape
# add connection from Luebeck to Siems
- new_bus = str(network.buses.index.astype(np.int64).max()+1)
- new_trafo = str(network.transformers.index.astype(np.int64).max()+1)
- new_line = str(network.lines.index.astype(np.int64).max()+1)
+ new_bus = str(network.buses.index.astype(np.int64).max() + 1)
+ new_trafo = str(network.transformers.index.astype(np.int64).max() + 1)
+ new_line = str(network.lines.index.astype(np.int64).max() + 1)
network.add("Bus", new_bus, carrier='AC',
v_nom=220, x=10.760835, y=53.909745)
network.add("Transformer", new_trafo, bus0="25536",
@@ -354,9 +589,17 @@ def data_manipulation_sh(network):
return
-def results_to_csv(network, path):
+def _enumerate_row(row):
+ row['name'] = row.name
+ return row
+
+
+def results_to_csv(network, args, pf_solution=None):
"""
"""
+
+ path = args['results']
+
if path == False:
return None
@@ -366,7 +609,14 @@ def results_to_csv(network, path):
network.export_to_csv_folder(path)
data = pd.read_csv(os.path.join(path, 'network.csv'))
data['time'] = network.results['Solver'].Time
- data.to_csv(os.path.join(path, 'network.csv'))
+ data = data.apply(_enumerate_row, axis=1)
+ data.to_csv(os.path.join(path, 'network.csv'), index=False)
+
+ with open(os.path.join(path, 'args.json'), 'w') as fp:
+ json.dump(args, fp)
+
+    if pf_solution is not None:
+ pf_solution.to_csv(os.path.join(path, 'pf_solution.csv'), index=True)
if hasattr(network, 'Z'):
file = [i for i in os.listdir(
@@ -374,7 +624,7 @@ def results_to_csv(network, path):
if file:
print('Z already calculated')
else:
- network.Z.to_csv(path.strip('0123456789')+'/Z.csv', index=False)
+ network.Z.to_csv(path.strip('0123456789') + '/Z.csv', index=False)
return
@@ -384,43 +634,43 @@ def parallelisation(network, start_snapshot, end_snapshot, group_size,
print("Performing linear OPF, {} snapshot(s) at a time:".
format(group_size))
- x = time.time()
+ t = time.time()
- for i in range(int((end_snapshot-start_snapshot+1)/group_size)):
+ for i in range(int((end_snapshot - start_snapshot + 1) / group_size)):
if i > 0:
network.storage_units.state_of_charge_initial = network.\
storage_units_t.state_of_charge.loc[
- network.snapshots[group_size*i-1]]
- network.lopf(network.snapshots[group_size*i:group_size*i+group_size],
+ network.snapshots[group_size * i - 1]]
+ network.lopf(network.snapshots[
+ group_size * i:group_size * i + group_size],
solver_name=solver_name,
solver_options=solver_options,
extra_functionality=extra_functionality)
network.lines.s_nom = network.lines.s_nom_opt
- y = time.time()
- z = (y - x) / 60
- print(z)
+    print((time.time() - t) / 60)
return
-
-def pf_post_lopf(network, scenario):
-
- network_pf = network
-
- # For the PF, set the P to the optimised P
- network_pf.generators_t.p_set = network_pf.generators_t.p_set.reindex(
- columns=network_pf.generators.index)
- network_pf.generators_t.p_set = network_pf.generators_t.p
-
+def set_slack(network):
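+    """ Set the largest PV generator at the bus with the highest summed
+    dispatch as the new slack; the former slack generator falls back to PV
+    or PQ control depending on its p_nom and carrier. """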
old_slack = network.generators.index[network.
generators.control == 'Slack'][0]
+ # check if old slack was PV or PQ control:
+ if network.generators.p_nom[old_slack] > 50 and network.generators.\
+ carrier[old_slack] in ('solar', 'wind'):
+ old_control = 'PQ'
+ elif network.generators.p_nom[old_slack] > 50 and network.generators.\
+ carrier[old_slack] not in ('solar', 'wind'):
+ old_control = 'PV'
+    else:
+        old_control = 'PQ'
+
old_gens = network.generators
gens_summed = network.generators_t.p.sum()
old_gens['p_summed'] = gens_summed
max_gen_buses_index = old_gens.groupby(['bus']).agg(
{'p_summed': np.sum}).p_summed.sort_values().index
- for bus_iter in range(1, len(max_gen_buses_index)-1):
+ for bus_iter in range(1, len(max_gen_buses_index) - 1):
if old_gens[(network.
generators['bus'] == max_gen_buses_index[-bus_iter]) &
(network.generators['control'] == 'PV')].empty:
@@ -434,25 +684,237 @@ def pf_post_lopf(network, scenario):
p_nom[(network.generators['bus'] == new_slack_bus) & (
network.generators['control'] == 'PV')].sort_values().index[-1]
- # check if old slack was PV or PQ control:
- if network.generators.p_nom[old_slack] > 50 and network.generators.\
- carrier[old_slack] in ('solar', 'wind'):
- old_control = 'PQ'
- elif network.generators.p_nom[old_slack] > 50 and network.generators.\
- carrier[old_slack] not in ('solar', 'wind'):
- old_control = 'PV'
- elif network.generators.p_nom[old_slack] < 50:
- old_control = 'PQ'
-
network.generators = network.generators.set_value(
old_slack, 'control', old_control)
network.generators = network.generators.set_value(
new_slack_gen, 'control', 'Slack')
+
+ return network
+
+
+def pf_post_lopf(network, args, extra_functionality, add_foreign_lopf):
+
+ network_pf = network
+
+ # Update x of extended lines and transformers
+ if network_pf.lines.s_nom_extendable.any() or \
+ network_pf.transformers.s_nom_extendable.any():
+
+ storages_extendable = network_pf.storage_units.p_nom_extendable.copy()
+ lines_extendable = network_pf.lines.s_nom_extendable.copy()
+ links_extendable = network_pf.links.p_nom_extendable.copy()
+ trafos_extendable = network_pf.transformers.s_nom_extendable.copy()
+
+ storages_p_nom = network_pf.storage_units.p_nom.copy()
+        lines_s_nom = network_pf.lines.s_nom.copy()
+ links_p_nom = network_pf.links.p_nom.copy()
+ trafos_s_nom = network_pf.transformers.s_nom.copy()
+
+ network_pf.lines.x[network.lines.s_nom_extendable] = \
+ network_pf.lines.x * network.lines.s_nom /\
+ network_pf.lines.s_nom_opt
+
+ network_pf.lines.r[network.lines.s_nom_extendable] = \
+ network_pf.lines.r * network.lines.s_nom /\
+ network_pf.lines.s_nom_opt
+
+ network_pf.lines.b[network.lines.s_nom_extendable] = \
+ network_pf.lines.b * network.lines.s_nom_opt /\
+ network_pf.lines.s_nom
+
+ network_pf.lines.g[network.lines.s_nom_extendable] = \
+ network_pf.lines.g * network.lines.s_nom_opt /\
+ network_pf.lines.s_nom
+
+ network_pf.transformers.x[network.transformers.s_nom_extendable] = \
+ network_pf.transformers.x * network.transformers.s_nom / \
+ network_pf.transformers.s_nom_opt
+
+ network_pf.lines.s_nom_extendable = False
+ network_pf.transformers.s_nom_extendable = False
+ network_pf.storage_units.p_nom_extendable = False
+ network_pf.links.p_nom_extendable = False
+ network_pf.lines.s_nom = network.lines.s_nom_opt
+ network_pf.transformers.s_nom = network.transformers.s_nom_opt
+ network_pf.storage_units.p_nom = network_pf.storage_units.p_nom_opt
+ network_pf.links.p_nom = network_pf.links.p_nom_opt
+
+ network_pf.lopf(network.snapshots,
+ solver_name=args['solver'],
+ solver_options=args['solver_options'],
+ extra_functionality=extra_functionality)
+
+ network_pf.storage_units.p_nom_extendable = storages_extendable
+ network_pf.lines.s_nom_extendable = lines_extendable
+ network_pf.links.p_nom_extendable = links_extendable
+ network_pf.transformers.s_nom_extendable = trafos_extendable
+
+ network_pf.storage_units.p_nom = storages_p_nom
+ network_pf.lines.s_nom = lines_s_nom
+ network_pf.links.p_nom = links_p_nom
+ network_pf.transformers.s_nom = trafos_s_nom
+
+ # For the PF, set the P to the optimised P
+ network_pf.generators_t.p_set = network_pf.generators_t.p_set.reindex(
+ columns=network_pf.generators.index)
+ network_pf.generators_t.p_set = network_pf.generators_t.p
+
+ network_pf.storage_units_t.p_set = network_pf.storage_units_t.p_set\
+ .reindex(columns=network_pf.storage_units.index)
+ network_pf.storage_units_t.p_set = network_pf.storage_units_t.p
+
+ network_pf.links_t.p_set = network_pf.links_t.p_set.reindex(
+ columns=network_pf.links.index)
+ network_pf.links_t.p_set = network_pf.links_t.p0
+
+ # if foreign lines are DC, execute pf only on sub_network in Germany
+ if args['foreign_lines']['carrier'] == 'DC':
+ n_bus = pd.Series(index=network.sub_networks.index)
+
+        for i in range(0, len(network.sub_networks.index)):
+ n_bus[i] = len(network.buses.index[
+ network.buses.sub_network.astype(int) == i])
+
+ sub_network_DE = n_bus.index[n_bus == n_bus.max()]
+
+ foreign_bus = network.buses[network.buses.sub_network !=
+ sub_network_DE.values[0]]
+
+ foreign_comp = {'Bus': network.buses[
+ network.buses.sub_network !=
+ sub_network_DE.values[0]],
+ 'Generator': network.generators[
+ network.generators.bus.isin(
+ foreign_bus.index)],
+ 'Load': network.loads[
+ network.loads.bus.isin(foreign_bus.index)],
+ 'Transformer': network.transformers[
+ network.transformers.bus0.isin(
+ foreign_bus.index)],
+ 'StorageUnit': network.storage_units[
+ network.storage_units.bus.isin(
+ foreign_bus.index)]}
+
+ foreign_series = {'Bus': network.buses_t.copy(),
+ 'Generator': network.generators_t.copy(),
+ 'Load': network.loads_t.copy(),
+ 'Transformer': network.transformers_t.copy(),
+ 'StorageUnit': network.storage_units_t.copy()}
+
+ for comp in sorted(foreign_series):
+ attr = sorted(foreign_series[comp])
+ for a in attr:
+ if not foreign_series[comp][a].empty:
+ if a != 'p_max_pu':
+ foreign_series[comp][a] = foreign_series[comp][a][
+ foreign_comp[comp].index]
+
+ else:
+ foreign_series[comp][a] = foreign_series[comp][a][
+ foreign_comp[comp][foreign_comp[
+ comp]['carrier'].isin(
+ ['solar', 'wind_onshore',
+ 'wind_offshore',
+ 'run_of_river'])].index]
+
+ network.buses = network.buses.drop(foreign_bus.index)
+ network.generators = network.generators[
+ network.generators.bus.isin(network.buses.index)]
+ network.loads = network.loads[
+ network.loads.bus.isin(network.buses.index)]
+ network.transformers = network.transformers[
+ network.transformers.bus0.isin(network.buses.index)]
+ network.storage_units = network.storage_units[
+ network.storage_units.bus.isin(network.buses.index)]
+
+ # Set slack bus
+ network = set_slack(network)
# execute non-linear pf
- network_pf.pf(scenario.timeindex, use_seed=True)
+ pf_solution = network_pf.pf(network.snapshots, use_seed=True)
+
+ # if selected, copy lopf results of neighboring countries to network
+ if (args['foreign_lines']['carrier'] == 'DC') & add_foreign_lopf:
+ for comp in sorted(foreign_series):
+ network.import_components_from_dataframe(foreign_comp[comp], comp)
+
+ for attr in sorted(foreign_series[comp]):
+ network.import_series_from_dataframe(foreign_series
+ [comp][attr], comp, attr)
+
+ pf_solve = pd.DataFrame(index=pf_solution['converged'].index)
+ pf_solve['converged'] = pf_solution['converged'].values
+ pf_solve['error'] = pf_solution['error'].values
+ pf_solve['n_iter'] = pf_solution['n_iter'].values
+
+    n_not_converged = (~pf_solve.converged).sum()
+    if n_not_converged > 0:
+        logger.warning("PF of %d snapshots not converged.", n_not_converged)
+
+ return pf_solve
+
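+# A minimal usage sketch (not part of the module): run the non-linear pf on
+# the LOPF results; `args` is the eTraGo settings dict (see
+# get_args_setting) and `extra_functionality` may be None:
+#
+#     pf_solve = pf_post_lopf(network, args, None, add_foreign_lopf=True)
+#     print(pf_solve.converged.all())
+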
+
+def distribute_q(network, allocation='p_nom'):
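+    """ Distribute the reactive power at each bus onto its generators and
+    storage units, either proportionally to the optimised capacity p_nom_opt
+    (allocation='p_nom') or to the active power dispatch (allocation='p').
+    """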
+
+ network.allocation = allocation
+ if allocation == 'p':
+ p_sum = network.generators_t['p'].\
+ groupby(network.generators.bus, axis=1).sum().\
+ add(network.storage_units_t['p'].abs().groupby(
+ network.storage_units.bus, axis=1).sum(), fill_value=0)
+ q_sum = network.generators_t['q'].\
+ groupby(network.generators.bus, axis=1).sum()
+
+ q_distributed = network.generators_t.p / \
+ p_sum[network.generators.bus.sort_index()].values * \
+ q_sum[network.generators.bus.sort_index()].values
+
+ q_storages = network.storage_units_t.p / \
+ p_sum[network.storage_units.bus.sort_index()].values *\
+ q_sum[network.storage_units.bus.sort_index()].values
+
+ if allocation == 'p_nom':
+
+ q_bus = network.generators_t['q'].\
+ groupby(network.generators.bus, axis=1).sum().add(
+ network.storage_units_t.q.groupby(
+                network.storage_units.bus, axis=1).sum(), fill_value=0)
+
+ p_nom_dist = network.generators.p_nom_opt.sort_index()
+ p_nom_dist[p_nom_dist.index.isin(network.generators.index
+ [network.generators.carrier ==
+ 'load shedding'])] = 0
+
+ q_distributed = q_bus[
+ network.generators.bus].multiply(p_nom_dist.values) /\
+ (network.generators.p_nom_opt[network.generators.carrier !=
+ 'load shedding'].groupby(
+ network.generators.bus).sum().add(
+ network.storage_units.p_nom_opt.groupby
+ (network.storage_units.bus).sum(), fill_value=0))[
+ network.generators.bus.sort_index()].values
+
+ q_distributed.columns = network.generators.index
+
+ q_storages = q_bus[network.storage_units.bus]\
+ .multiply(network.storage_units.p_nom_opt.values) / \
+ ((network.generators.p_nom_opt[network.generators.carrier !=
+ 'load shedding'].groupby(
+ network.generators.bus).sum().add(
+ network.storage_units.p_nom_opt.
+ groupby(network.storage_units.bus).sum(), fill_value=0))[
+ network.storage_units.bus].values)
+
+ q_storages.columns = network.storage_units.index
+
+ q_distributed[q_distributed.isnull()] = 0
+ q_distributed[q_distributed.abs() == np.inf] = 0
+ q_storages[q_storages.isnull()] = 0
+ q_storages[q_storages.abs() == np.inf] = 0
+ network.generators_t.q = q_distributed
+ network.storage_units_t.q = q_storages
- return network_pf
+ return network
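+
+# A minimal usage sketch (not part of the module): typically called after
+# pf_post_lopf, so that network.generators_t.q is available:
+#
+#     network = distribute_q(network, allocation='p_nom')
+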
def calc_line_losses(network):
@@ -491,7 +953,7 @@ def calc_line_losses(network):
# Crastan, Elektrische Energieversorgung, p.151
# trafo 1000 MVA: 99.8 %
network.transformers = network.transformers.assign(
- losses=np.multiply(network.transformers.s_nom, (1-0.998)).values)
+ losses=np.multiply(network.transformers.s_nom, (1 - 0.998)).values)
# calculate total losses (possibly enhance with adding these values
# to network container)
@@ -582,7 +1044,7 @@ def group_parallel_lines(network):
return
-def set_line_costs(network, cost110=230, cost220=290, cost380=85):
+def set_line_costs(network, cost110=230, cost220=290, cost380=85, costDC=375):
""" Set capital costs for extendable lines in respect to PyPSA [€/MVA]
----------
network : :class:`pypsa.Network
@@ -596,20 +1058,22 @@ def set_line_costs(network, cost110=230, cost220=290, cost380=85):
cost380 : capital costs per km for 380kV lines and cables
default: 85€/MVA/km, source: costs for extra circuit in
NEP 2025, capactity from most used 380 kV lines in NEP
+ costDC : capital costs per km for DC-lines
+ default: 375€/MVA/km, source: costs for DC transmission line
+ in NEP 2035
-------
"""
network.lines["v_nom"] = network.lines.bus0.map(network.buses.v_nom)
- network.lines.loc[(network.lines.v_nom == 110) & network.lines.
- s_nom_extendable,
+ network.lines.loc[(network.lines.v_nom == 110),
'capital_cost'] = cost110 * network.lines.length
- network.lines.loc[(network.lines.v_nom == 220) & network.lines.
- s_nom_extendable,
+ network.lines.loc[(network.lines.v_nom == 220),
'capital_cost'] = cost220 * network.lines.length
- network.lines.loc[(network.lines.v_nom == 380) & network.lines.
- s_nom_extendable,
+ network.lines.loc[(network.lines.v_nom == 380),
'capital_cost'] = cost380 * network.lines.length
+ network.links.loc[network.links.p_nom_extendable,
+ 'capital_cost'] = costDC * network.links.length
return network
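+
+# A short worked example (not part of the module): a 100 km 380 kV line gets
+# capital_cost = 85 €/MVA/km * 100 km = 8500 €/MVA, a 100 km extendable DC
+# link 375 * 100 = 37500 €/MVA:
+#
+#     network = set_line_costs(network)
+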
@@ -636,18 +1100,193 @@ def set_trafo_costs(network, cost110_220=7500, cost110_380=17333,
network.buses.v_nom)
network.transformers.loc[(network.transformers.v_nom0 == 110) & (
- network.transformers.v_nom1 == 220) & network.transformers.
- s_nom_extendable, 'capital_cost'] = cost110_220
+ network.transformers.v_nom1 == 220), 'capital_cost'] = cost110_220
network.transformers.loc[(network.transformers.v_nom0 == 110) & (
- network.transformers.v_nom1 == 380) & network.transformers.
- s_nom_extendable, 'capital_cost'] = cost110_380
+ network.transformers.v_nom1 == 380), 'capital_cost'] = cost110_380
network.transformers.loc[(network.transformers.v_nom0 == 220) & (
- network.transformers.v_nom1 == 380) & network.transformers.
- s_nom_extendable, 'capital_cost'] = cost220_380
+ network.transformers.v_nom1 == 380), 'capital_cost'] = cost220_380
return network
+def add_missing_components(network):
+ # Munich
+ '''
+ add missing transformer at Heizkraftwerk Nord in Munich:
+ https://www.swm.de/privatkunden/unternehmen/energieerzeugung/heizkraftwerke.html?utm_medium=301
+ to bus 25096:
+ 25369 (86)
+ 28232 (24)
+ 25353 to 25356 (79)
+ to bus 23822: (110kV bus of 380/110-kV-transformer)
+ 25355 (90)
+ 28212 (98)
+
+ 25357 to 665 (85)
+ 25354 to 27414 (30)
+ 27414 to 28212 (33)
+ 25354 to 28294 (32/63)
+ 28335 to 28294 (64)
+ 28335 to 28139 (28)
+ Overhead lines:
+ 16573 to 24182 (part of 4)
+ '''
+ """
+    Installed capacity of the extra-high to high voltage transformer level
+    (380 kV / 110 kV): 2,750,000 kVA
+ https://www.swm-infrastruktur.de/strom/netzstrukturdaten/strukturmerkmale.html
+ """
+ new_trafo = str(network.transformers.index.astype(int).max() + 1)
+
+ network.add("Transformer", new_trafo, bus0="23648", bus1="16573",
+ x=0.135 / (2750 / 2),
+ r=0.0, tap_ratio=1, s_nom=2750 / 2)
+
+ def add_110kv_line(bus0, bus1, overhead=False):
+ new_line = str(network.lines.index.astype(int).max() + 1)
+ if not overhead:
+ network.add("Line", new_line, bus0=bus0, bus1=bus1, s_nom=280)
+ else:
+ network.add("Line", new_line, bus0=bus0, bus1=bus1, s_nom=260)
+ network.lines.loc[new_line, "scn_name"] = "Status Quo"
+ network.lines.loc[new_line, "v_nom"] = 110
+ network.lines.loc[new_line, "version"] = "added_manually"
+ network.lines.loc[new_line, "frequency"] = 50
+ network.lines.loc[new_line, "cables"] = 3.0
+ network.lines.loc[new_line, "country"] = 'DE'
+ network.lines.loc[new_line, "length"] = (
+ pypsa.geo.haversine(network.buses.loc[bus0, ["x", "y"]],
+ network.buses.loc[bus1, ["x", "y"]])
+ [0][0] * 1.2)
+ if not overhead:
+ network.lines.loc[new_line, "r"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.0177)
+ network.lines.loc[new_line, "g"] = 0
+ # or: (network.lines.loc[new_line, "length"]*78e-9)
+ network.lines.loc[new_line, "x"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.3e-3)
+ network.lines.loc[new_line, "b"] = (network.lines.
+ loc[new_line, "length"] *
+ 250e-9)
+
+ elif overhead:
+ network.lines.loc[new_line, "r"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.05475)
+ network.lines.loc[new_line, "g"] = 0
+ # or: (network.lines.loc[new_line, "length"]*40e-9)
+ network.lines.loc[new_line, "x"] = (network.lines.
+ loc[new_line, "length"] *
+ 1.2e-3)
+ network.lines.loc[new_line, "b"] = (network.lines.
+ loc[new_line, "length"] *
+ 9.5e-9)
+
+ add_110kv_line("16573", "28353")
+ add_110kv_line("16573", "28092")
+ add_110kv_line("25096", "25369")
+ add_110kv_line("25096", "28232")
+ add_110kv_line("25353", "25356")
+ add_110kv_line("23822", "25355")
+ add_110kv_line("23822", "28212")
+ add_110kv_line("25357", "665")
+ add_110kv_line("25354", "27414")
+ add_110kv_line("27414", "28212")
+ add_110kv_line("25354", "28294")
+ add_110kv_line("28335", "28294")
+ add_110kv_line("28335", "28139")
+ add_110kv_line("16573", "24182", overhead=True)
+
+ # Stuttgart
+ """
+ Stuttgart:
+ Missing transformer, because 110-kV-bus is situated outside
+ Heizkraftwerk Heilbronn:
+ """
+    # new_trafo = str(network.transformers.index.astype(int).max() + 1)
+ network.add("Transformer", '99999', bus0="25766", bus1="18967",
+ x=0.135 / 300, r=0.0, tap_ratio=1, s_nom=300)
+ """
+ According to:
+ https://assets.ctfassets.net/xytfb1vrn7of/NZO8x4rKesAcYGGcG4SQg/b780d6a3ca4c2600ab51a30b70950bb1/netzschemaplan-110-kv.pdf
+ the following lines are missing:
+ """
+ add_110kv_line("18967", "22449", overhead=True) # visible in OSM & DSO map
+ add_110kv_line("21165", "24068", overhead=True) # visible in OSM & DSO map
+ add_110kv_line("23782", "24089", overhead=True)
+ # visible in DSO map & OSM till 1 km from bus1
+ """
+ Umspannwerk Möhringen (bus 23697)
+ https://de.wikipedia.org/wiki/Umspannwerk_M%C3%B6hringen
+ there should be two connections:
+ to Sindelfingen (2*110kV)
+ to Wendingen (former 220kV, now 2*110kV)
+ the line to Sindelfingen is connected, but the connection of Sindelfingen
+ itself to 380kV is missing:
+ """
+ add_110kv_line("19962", "27671", overhead=True) # visible in OSM & DSO map
+ add_110kv_line("19962", "27671", overhead=True)
+ """
+    line to Wendingen is missing, probably because it ends shortly before
+    the substation site and is connected via cables:
+ """
+ add_110kv_line("23697", "24090", overhead=True) # visible in OSM & DSO map
+ add_110kv_line("23697", "24090", overhead=True)
+
+ # Lehrte
+ """
+    Lehrte: 220kV bus located outside the site of Betriebszentrum Lehrte and
+    therefore not connected:
+ """
+
+ def add_220kv_line(bus0, bus1, overhead=False):
+ new_line = str(network.lines.index.astype(int).max() + 1)
+ if not overhead:
+ network.add("Line", new_line, bus0=bus0, bus1=bus1, s_nom=550)
+ else:
+ network.add("Line", new_line, bus0=bus0, bus1=bus1, s_nom=520)
+ network.lines.loc[new_line, "scn_name"] = "Status Quo"
+ network.lines.loc[new_line, "v_nom"] = 220
+ network.lines.loc[new_line, "version"] = "added_manually"
+ network.lines.loc[new_line, "frequency"] = 50
+ network.lines.loc[new_line, "cables"] = 3.0
+ network.lines.loc[new_line, "country"] = 'DE'
+ network.lines.loc[new_line, "length"] = (
+ pypsa.geo.haversine(network.buses.loc[bus0, ["x", "y"]],
+ network.buses.loc[bus1, ["x", "y"]])[0][0] *
+ 1.2)
+ if not overhead:
+ network.lines.loc[new_line, "r"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.0176)
+ network.lines.loc[new_line, "g"] = 0
+ # or: (network.lines.loc[new_line, "length"]*67e-9)
+ network.lines.loc[new_line, "x"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.3e-3)
+ network.lines.loc[new_line, "b"] = (network.lines.
+ loc[new_line, "length"] *
+ 210e-9)
+
+ elif overhead:
+ network.lines.loc[new_line, "r"] = (network.lines.
+ loc[new_line, "length"] *
+ 0.05475)
+ network.lines.loc[new_line, "g"] = 0
+ # or: (network.lines.loc[new_line, "length"]*30e-9)
+ network.lines.loc[new_line, "x"] = (network.lines.
+ loc[new_line, "length"] * 1e-3)
+ network.lines.loc[new_line, "b"] = (network.lines.
+ loc[new_line, "length"] * 11e-9
+ )
+
+ add_220kv_line("266", "24633", overhead=True)
+ return network
+
+
def convert_capital_costs(network, start_snapshot, end_snapshot, p=0.05, T=40):
""" Convert capital_costs to fit to pypsa and caluculated time
----------
@@ -662,25 +1301,278 @@ def convert_capital_costs(network, start_snapshot, end_snapshot, p=0.05, T=40):
network.links.capital_cost = network.links.capital_cost + 400000
# Calculate present value of an annuity (PVA)
- PVA = (1 / p) - (1 / (p*(1 + p) ** T))
+ PVA = (1 / p) - (1 / (p * (1 + p) ** T))
#
network.lines.loc[network.lines.s_nom_extendable == True,
'capital_cost'] = (network.lines.capital_cost /
- (PVA * (8760/(end_snapshot -
- start_snapshot + 1))))
+ (PVA * (8760 / (end_snapshot -
+ start_snapshot + 1))))
network.links.loc[network.links.p_nom_extendable == True,
'capital_cost'] = network.\
- links.capital_cost / (PVA * (8760//(end_snapshot -
- start_snapshot + 1)))
+ links.capital_cost / (PVA * (8760 // (end_snapshot -
+ start_snapshot + 1)))
network.transformers.loc[network.transformers.s_nom_extendable == True,
'capital_cost'] = network.\
transformers.capital_cost / \
- (PVA * (8760//(end_snapshot - start_snapshot + 1)))
+ (PVA * (8760 // (end_snapshot - start_snapshot + 1)))
network.storage_units.loc[network.storage_units.
p_nom_extendable == True,
'capital_cost'] = network.\
- storage_units.capital_cost / (8760//(end_snapshot -
- start_snapshot + 1))
+ storage_units.capital_cost / (8760 // (end_snapshot -
+ start_snapshot + 1))
return network
+
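+# A short worked example (not part of the module): with p=0.05 and T=40 the
+# present value of annuity is PVA = 1/0.05 - 1/(0.05 * 1.05**40), roughly
+# 17.16. For a full-year run (snapshots 1 to 8760) an extendable line's
+# capital_cost is therefore divided by about 17.16:
+#
+#     network = convert_capital_costs(network, start_snapshot=1,
+#                                     end_snapshot=8760)
+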
+
+def find_snapshots(network, carrier, maximum=True, minimum=True, n=3):
+
+ """
+ Function that returns snapshots with maximum and/or minimum feed-in of
+ selected carrier.
+
+ Parameters
+ ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+ carrier: str
+ Selected carrier of generators
+ maximum: bool
+ Choose if timestep of maximal feed-in is returned.
+ minimum: bool
+ Choose if timestep of minimal feed-in is returned.
+ n: int
+ Number of maximal/minimal snapshots
+
+ Returns
+ -------
+ calc_snapshots : 'pandas.core.indexes.datetimes.DatetimeIndex'
+ List containing snapshots
+ """
+
+ if carrier == 'residual load':
+ power_plants = network.generators[network.generators.carrier.
+ isin(['solar', 'wind', 'wind_onshore'])]
+ power_plants_t = network.generators.p_nom[power_plants.index] * \
+ network.generators_t.p_max_pu[power_plants.index]
+ load = network.loads_t.p_set.sum(axis=1)
+ all_renew = power_plants_t.sum(axis=1)
+ all_carrier = load - all_renew
+
+ if carrier in ('solar', 'wind', 'wind_onshore',
+ 'wind_offshore', 'run_of_river'):
+ power_plants = network.generators[network.generators.carrier
+ == carrier]
+
+ power_plants_t = network.generators.p_nom[power_plants.index] * \
+ network.generators_t.p_max_pu[power_plants.index]
+ all_carrier = power_plants_t.sum(axis=1)
+
+    if maximum and not minimum:
+        times = all_carrier.sort_values().tail(n=n)
+
+    if minimum and not maximum:
+        times = all_carrier.sort_values().head(n=n)
+
+ if maximum and minimum:
+ times = all_carrier.sort_values().head(n=n)
+ times = times.append(all_carrier.sort_values().tail(n=n))
+
+ calc_snapshots = all_carrier.index[all_carrier.index.isin(times.index)]
+
+ return calc_snapshots
+
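+# A minimal usage sketch (not part of the module): get the three snapshots
+# with the highest and the three with the lowest wind_onshore feed-in and
+# optimise only those:
+#
+#     snaps = find_snapshots(network, 'wind_onshore', n=3)
+#     network.lopf(snaps, solver_name='glpk')
+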
+
+def ramp_limits(network):
+ """ Add ramping constraints to thermal power plants.
+
+ Parameters
+ ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+
+ Returns
+ -------
+
+ """
+ carrier = ['coal', 'biomass', 'gas', 'oil', 'waste', 'lignite',
+ 'uranium', 'geothermal']
+ data = {'start_up_cost':[77, 57, 42, 57, 57, 77, 50, 57], #€/MW
+ 'start_up_fuel':[4.3, 2.8, 1.45, 2.8, 2.8, 4.3, 16.7, 2.8], #MWh/MW
+ 'min_up_time':[5, 2, 3, 2, 2, 5, 12, 2],
+ 'min_down_time':[7, 2, 2, 2, 2, 7, 17, 2],
+# =============================================================================
+# 'ramp_limit_start_up':[0.4, 0.4, 0.4, 0.4, 0.4, 0.6, 0.5, 0.4],
+# 'ramp_limit_shut_down':[0.4, 0.4, 0.4, 0.4, 0.4, 0.6, 0.5, 0.4]
+# =============================================================================
+ 'p_min_pu':[0.33, 0.38, 0.4, 0.38, 0.38, 0.5, 0.45, 0.38]
+ }
+ df = pd.DataFrame(data, index=carrier)
+ fuel_costs = network.generators.marginal_cost.groupby(
+ network.generators.carrier).mean()[carrier]
+ df['start_up_fuel'] = df['start_up_fuel'] * fuel_costs
+ df['start_up_cost'] = df['start_up_cost'] + df['start_up_fuel']
+ df.drop('start_up_fuel', axis=1, inplace=True)
+ for tech in df.index:
+ for limit in df.columns:
+ network.generators.loc[network.generators.carrier == tech,
+ limit] = df.loc[tech, limit]
+    network.generators.start_up_cost = network.generators.start_up_cost\
+        * network.generators.p_nom
+ network.generators.committable = True
+
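+# A minimal usage sketch (not part of the module): marking the thermal
+# generators as committable turns the LOPF into a mixed-integer problem, so
+# a MIP-capable solver (e.g. gurobi) is needed:
+#
+#     ramp_limits(network)
+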
+
+def get_args_setting(args, jsonpath='scenario_setting.json'):
+ """
+    Get and open the json file with the scenario settings of eTraGo (``args``).
+    The settings include all eTraGo-specific arguments and parameters
+    needed for a reproducible calculation.
+
+ Parameters
+ ----------
+    jsonpath : str
+        Default: ``scenario_setting.json``
+        Path to the scenario setting json file
+
+ Returns
+ -------
+ args : dict
+ Dictionary of json file
+ """
+
+    if jsonpath is not None:
+ with open(jsonpath) as f:
+ args = json.load(f)
+
+
+ return args
+
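+# A minimal usage sketch (not part of the module): load the settings from
+# the default file; if jsonpath is None the passed dict is returned
+# unchanged:
+#
+#     args = get_args_setting(args, jsonpath='scenario_setting.json')
+#     print(args['solver'])
+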
+
+def set_line_country_tags(network):
+
+ transborder_lines_0 = network.lines[network.lines['bus0'].isin(
+ network.buses.index[network.buses['country_code'] != 'DE'])].index
+ transborder_lines_1 = network.lines[network.lines['bus1'].isin(
+ network.buses.index[network.buses['country_code']!= 'DE'])].index
+ #set country tag for lines
+ network.lines.loc[transborder_lines_0, 'country'] = \
+ network.buses.loc[network.lines.loc[transborder_lines_0, 'bus0']\
+ .values, 'country_code'].values
+
+ network.lines.loc[transborder_lines_1, 'country'] = \
+ network.buses.loc[network.lines.loc[transborder_lines_1, 'bus1']\
+ .values, 'country_code'].values
+ network.lines['country'].fillna('DE', inplace=True)
+ doubles = list(set(transborder_lines_0.intersection(transborder_lines_1)))
+ for line in doubles:
+ c_bus0 = network.buses.loc[network.lines.loc[line, 'bus0'], 'country']
+ c_bus1 = network.buses.loc[network.lines.loc[line, 'bus1'], 'country']
+ network.lines.loc[line, 'country'] = '{}{}'.format(c_bus0, c_bus1)
+
+ transborder_links_0 = network.links[network.links['bus0'].isin(
+ network.buses.index[network.buses['country_code']!= 'DE'])].index
+ transborder_links_1 = network.links[network.links['bus1'].isin(
+ network.buses.index[network.buses['country_code'] != 'DE'])].index
+
+ #set country tag for links
+ network.links.loc[transborder_links_0, 'country'] = \
+ network.buses.loc[network.links.loc[transborder_links_0, 'bus0']\
+ .values, 'country_code'].values
+
+ network.links.loc[transborder_links_1, 'country'] = \
+ network.buses.loc[network.links.loc[transborder_links_1, 'bus1']\
+ .values, 'country_code'].values
+ network.links['country'].fillna('DE', inplace=True)
+ doubles = list(set(transborder_links_0.intersection(transborder_links_1)))
+ for link in doubles:
+ c_bus0 = network.buses.loc[network.links.loc[link, 'bus0'], 'country']
+ c_bus1 = network.buses.loc[network.links.loc[link, 'bus1'], 'country']
+ network.links.loc[link, 'country'] = '{}{}'.format(c_bus0, c_bus1)
+
+
+def crossborder_capacity(network, method, capacity_factor):
+ """
+    Correct interconnector capacities.
+
+ Parameters
+ ----------
+ network : :class:`pypsa.Network
+ Overall container of PyPSA
+ method : string
+ Method of correction. Options are 'ntc_acer' and 'thermal_acer'.
+ 'ntc_acer' corrects all capacities according to values published by
+ the ACER in 2016.
+ 'thermal_acer' corrects certain capacities where our dataset most
+ likely overestimates the thermal capacity.
+ capacity_factor : float
+ branch capacity factor. Makes sure that new thermal values are adjusted
+ in the same way as the rest of the network. ntc-values are not adjusted
+ since they already include n-1 security.
+
+ """
+ if method == 'ntc_acer':
+ cap_per_country = {'AT': 4900,
+ 'CH': 2695,
+ 'CZ': 1301,
+ 'DK': 913,
+ 'FR': 3593,
+ 'LU': 2912,
+ 'NL': 2811,
+ 'PL': 280,
+ 'SE': 217,
+ 'CZAT': 574,
+ 'ATCZ': 574,
+ 'CZPL': 312,
+ 'PLCZ': 312,
+ 'ATCH': 979,
+ 'CHAT': 979,
+ 'CHFR': 2087,
+ 'FRCH': 2087,
+ 'FRLU': 364,
+ 'LUFR': 364,
+ 'SEDK': 1928,
+ 'DKSE': 1928}
+ capacity_factor = 1
+ elif method == 'thermal_acer':
+ cap_per_country = {'CH': 12000,
+ 'DK': 4000,
+ 'SEDK': 3500,
+ 'DKSE': 3500}
+ if not network.lines[network.lines.country != 'DE'].empty:
+ weighting = network.lines.loc[network.lines.country!='DE', 's_nom'].\
+ groupby(network.lines.country).transform(lambda x: x/x.sum())
+
+ weighting_links = network.links.loc[network.links.country!='DE', 'p_nom'].\
+ groupby(network.links.country).transform(lambda x: x/x.sum())
+
+ for country in cap_per_country:
+
+ index = network.lines[network.lines.country == country].index
+ index_links = network.links[network.links.country == country].index
+
+ if not network.lines[network.lines.country == country].empty:
+ network.lines.loc[index, 's_nom'] = \
+ weighting[index] * cap_per_country[country] *\
+ capacity_factor
+
+ if not network.links[network.links.country == country].empty:
+ network.links.loc[index_links, 'p_nom'] = \
+ weighting_links[index_links] * cap_per_country\
+ [country] * capacity_factor
+ if country == 'SE':
+ network.links.loc[network.links.country == country, 'p_nom'] =\
+ cap_per_country[country]
+
+ if not network.lines[network.lines.country == (country+country)].empty:
+ i_lines = network.lines[network.lines.country ==
+ country+country].index
+ network.lines.loc[i_lines, 's_nom'] = \
+ weighting[i_lines] * cap_per_country[country]*\
+ capacity_factor
+
+ if not network.links[network.links.country == (country+country)].empty:
+ i_links = network.links[network.links.country ==
+ (country+country)].index
+ network.links.loc[i_links, 'p_nom'] = \
+ weighting_links[i_links] * cap_per_country\
+ [country]*capacity_factor
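+
+# A minimal usage sketch (not part of the module): scale the interconnectors
+# to ACER NTC values; for this method the capacity_factor is forced to 1
+# internally since NTC values already include n-1 security:
+#
+#     crossborder_capacity(network, method='ntc_acer', capacity_factor=0.7)
+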
diff --git a/setup.py b/setup.py
index 6898b871..444228d7 100644
--- a/setup.py
+++ b/setup.py
@@ -6,21 +6,22 @@
"Centre for Sustainable Energy Systems, "
"DLR-Institute for Networked Energy Systems")
__license__ = "GNU Affero General Public License Version 3 (AGPL-3.0)"
-__author__ = "mariusves"
+__author__ = ("ulfmueller, wolfbunke, BartelsJ, ClaraBuettner, gnn, "
+ "simnh, lukasol, s3pp, MGlauer, kimvk, MarlonSchlemminger, "
+ "mariusves")
setup(
name='eTraGo',
author='DLR VE, ZNES Flensburg',
author_email='',
- description=("electrical Transmission Grid Optimization of flexibility "
- "options for transmission grids based on PyPSA"),
- version='0.6.1',
+ description="electric transmission grid optimization",
+ version='0.7.0',
url='https://github.com/openego/eTraGo',
license="GNU Affero General Public License Version 3 (AGPL-3.0)",
packages=find_packages(),
include_package_data=True,
- install_requires=['egoio == 0.4.1',
+ install_requires=['egoio == 0.4.5',
'scikit-learn == 0.19.0',
'pandas >= 0.19.0, <=0.20.3',
'pypsa==0.11.0fork',
@@ -28,10 +29,11 @@
'geoalchemy2 >= 0.3.0, <=0.4.0',
'matplotlib >= 1.5.3, <=1.5.3',
'tsam==0.9.9',
- 'shapely'],
+ 'shapely',
+ 'oedialect'],
dependency_links=[
('git+https://git@github.com/openego/PyPSA.git'
- '@dev#egg=pypsa-0.11.0fork')],
+ '@master#egg=pypsa-0.11.0fork')],
extras_require={
'docs': [
'sphinx >= 1.4',