Merge branch 'master' into fix-agg-sum
Zeitsperre authored Feb 16, 2024
2 parents 801895f + 5db4a75 commit 9a093f9
Showing 5 changed files with 47 additions and 40 deletions.
5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -75,6 +75,11 @@ repos:
      - id: blackdoc
        additional_dependencies: [ 'black==24.2.0' ]
        exclude: '(xclim/indices/__init__.py|docs/installation.rst)'
+ - repo: https://github.com/codespell-project/codespell
+   rev: v2.2.6
+   hooks:
+     - id: codespell
+       additional_dependencies: [ 'tomli' ]
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2
    hooks:
36 changes: 18 additions & 18 deletions docs/notebooks/analogs.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
  "cell_type": "markdown",
- "id": "f55e553d",
+ "id": "0",
  "metadata": {},
  "source": [
  "# Spatial Analogues examples\n",
@@ -17,7 +17,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "242b4d7f",
+ "id": "1",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -32,7 +32,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "e92ad451",
+ "id": "2",
  "metadata": {},
  "source": [
  "## Input data\n",
@@ -43,7 +43,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "cd4d6320",
+ "id": "3",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -56,7 +56,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "8168d92e",
+ "id": "4",
  "metadata": {},
  "source": [
  "The goal is to find regions where the present climate is similar to that of a simulated future climate. We call \"candidates\" the dataset that contains the present-day indices. Here we use gridded observations provided by Natural Resources Canada (NRCan). This is the same data that was used as a reference for the bias-adjustment of the target simulation, which is essential to ensure the comparison holds.\n",
@@ -67,7 +67,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "0e9689da",
+ "id": "5",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -81,7 +81,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "5be4d98e",
+ "id": "6",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -91,7 +91,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "a322a0e4",
+ "id": "7",
  "metadata": {},
  "source": [
  "Let's plot the timeseries over Chibougamau for both periods to get an idea of the climate change between them. For the purpose of the plot, we'll need to convert the calendar of the data, as the simulation uses a `noleap` calendar."
@@ -100,7 +100,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "edeae473",
+ "id": "8",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -118,7 +118,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "34bef449",
+ "id": "9",
  "metadata": {},
  "source": [
  "All the work is encapsulated in the `xclim.analog.spatial_analogs` function. By default, the function expects that the distribution to be analysed is along the \"time\" dimension, as in our case. Inputs are datasets of indices; the target and the candidates should have the same indices and at least the `time` variable in common. Normal `xarray` broadcasting rules apply for the other dimensions.\n",
@@ -129,7 +129,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "b376b5f5",
+ "id": "10",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -155,7 +155,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "c038fbac",
+ "id": "11",
  "metadata": {},
  "source": [
  "This shows that the average temperature projected by our simulation for Chibougamau in 2041-2070 will be similar to the 1981-2010 average temperature of a region approximately extending zonally between 46°N and 47°N. Evidently, this metric is limited as it only compares the time averages. Let's run this again with the \"Zech-Aslan\" metric, one that compares the whole distribution."
@@ -164,7 +164,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "01fa92ce",
+ "id": "12",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -181,15 +181,15 @@
  },
  {
  "cell_type": "markdown",
- "id": "5f6116b6",
+ "id": "13",
  "metadata": {},
  "source": [
  "The new map is quite similar to the previous one, but notice how the scale has changed. Each metric defines its own scale (see the docstrings), but in all cases, lower values imply fewer differences between distributions. Notice also how the best analog has moved. This illustrates a common issue with these computations: there's a lot of noise in the results, and the absolute minimum may be extremely sensitive and move all over the place."
  ]
  },
  {
  "cell_type": "markdown",
- "id": "68c15c41",
+ "id": "14",
  "metadata": {},
  "source": [
  "These univariate analogies are interesting, but the real power of this method is that it can perform multivariate analyses."
@@ -198,7 +198,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "df89b1ab",
+ "id": "15",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -213,7 +213,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "7f396582",
+ "id": "16",
  "metadata": {},
  "source": [
  "As noted above, results depend on the metric used. For example, some of the metrics include some sort of standardization, while others don't. In the latter case, the absolute magnitude of the indices influences the results, i.e. analogies depend on the units. This information is given in each metric's docstring.\n",
@@ -224,7 +224,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "476b4e69",
+ "id": "17",
  "metadata": {},
  "outputs": [],
  "source": [
20 changes: 10 additions & 10 deletions docs/notebooks/partitioning.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
  "cell_type": "markdown",
- "id": "abc58a4e-17ba-4d5e-9e7f-2a5f9f02ed2c",
+ "id": "0",
  "metadata": {},
  "source": [
  "# Uncertainty partitioning\n",
@@ -16,7 +16,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "0c43c0a2-6b8d-4382-ab9f-c1c8f0885b9d",
+ "id": "1",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -41,7 +41,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "a3f6363e-cadd-4d7a-8fad-54c578e193a0",
+ "id": "2",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -74,7 +74,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "a0a95fd3-8916-42eb-af24-295672dc8b49",
+ "id": "3",
  "metadata": {},
  "source": [
  "## Create an ensemble\n",
@@ -89,7 +89,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "9f694023-d81a-405f-8f1a-de62570d45e7",
+ "id": "4",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -115,7 +115,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "c55bf634-a033-4926-a78d-1373cb5b24c0",
+ "id": "5",
  "metadata": {},
  "source": [
  "Now we're able to partition the uncertainties between scenario, model, and variability using the approach from Hawkins and Sutton.\n",
@@ -126,7 +126,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "a4b9a745-0f69-4644-9da1-b7d683a49789",
+ "id": "6",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -138,7 +138,7 @@
  },
  {
  "cell_type": "markdown",
- "id": "41af418d-9e92-433c-800c-6ba28ff7684c",
+ "id": "7",
  "metadata": {},
  "source": [
  "From there, it's relatively straightforward to compute the relative strength of uncertainties, and create graphics similar to those found in scientific papers.\n",
@@ -151,7 +151,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "a1520124-7c8d-4c69-a284-2c6e6770ca34",
+ "id": "8",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -227,7 +227,7 @@
  {
  "cell_type": "code",
  "execution_count": null,
- "id": "5e543932-b774-4845-a9f2-7773acb3b592",
+ "id": "9",
  "metadata": {},
  "outputs": [],
  "source": [
2 changes: 1 addition & 1 deletion environment.yml
@@ -61,7 +61,7 @@ dependencies:
  - pytest-cov
  - pytest-socket
  - pytest-xdist >=3.2
- - ruff >=0.1.0
+ - ruff >=0.2.0
  - sphinx
  - sphinx-autodoc-typehints
  - sphinx-codeautolink
24 changes: 13 additions & 11 deletions pyproject.toml
@@ -80,7 +80,7 @@ dev = [
  "pytest-cov",
  "pytest-socket",
  "pytest-xdist[psutil] >=3.2",
- "ruff >=0.1.0",
+ "ruff >=0.2.0",
  "tokenize-rt",
  "tox >=4.0",
  # "tox-conda", # Will be added when a tox@4.0+ compatible plugin is released.
@@ -257,6 +257,11 @@ exclude = [
  "build",
  ".eggs"
  ]
+
+ [tool.ruff.format]
+ line-ending = "auto"
+
+ [tool.ruff.lint]
  ignore = [
  "D205",
  "D400",
@@ -270,31 +275,28 @@
  "W"
  ]

- [tool.ruff.flake8-bandit]
+ [tool.ruff.lint.flake8-bandit]
  check-typed-exception = true

- [tool.ruff.flake8-import-conventions.aliases]
+ [tool.ruff.lint.flake8-import-conventions.aliases]
  "matplotlib.pyplot" = "plt"
  "xclim.indices" = "xci"
  numpy = "np"
  pandas = "pd"
  scipy = "sp"
  xarray = "xr"

- [tool.ruff.format]
- line-ending = "auto"
-
- [tool.ruff.isort]
+ [tool.ruff.lint.isort]
  known-first-party = ["xclim"]
  case-sensitive = true
  detect-same-package = false
  lines-after-imports = 1
  no-lines-before = ["future", "standard-library"]

- [tool.ruff.mccabe]
+ [tool.ruff.lint.mccabe]
  max-complexity = 20

- [tool.ruff.per-file-ignores]
+ [tool.ruff.lint.per-file-ignores]
  "docs/*.py" = ["D100", "D101", "D102", "D103"]
  "tests/*.py" = ["D100", "D101", "D102", "D103"]
  "xclim/**/__init__.py" = ["F401", "F403"]
@@ -305,8 +307,8 @@
  "xclim/indices/fire/_cffwis.py" = ["D103"]
  "xclim/sdba/utils.py" = ["D103"]

- [tool.ruff.pycodestyle]
+ [tool.ruff.lint.pycodestyle]
  max-doc-length = 180

- [tool.ruff.pydocstyle]
+ [tool.ruff.lint.pydocstyle]
  convention = "numpy"
