diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0cd47ac6e..89d93285b 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -75,6 +75,11 @@ repos:
       - id: blackdoc
         additional_dependencies: [ 'black==24.2.0' ]
         exclude: '(xclim/indices/__init__.py|docs/installation.rst)'
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.6
+    hooks:
+      - id: codespell
+        additional_dependencies: [ 'tomli' ]
   - repo: https://github.com/gitleaks/gitleaks
     rev: v8.18.2
     hooks:
diff --git a/docs/notebooks/analogs.ipynb b/docs/notebooks/analogs.ipynb
index 1cc7aebaf..c60f6f28e 100644
--- a/docs/notebooks/analogs.ipynb
+++ b/docs/notebooks/analogs.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "f55e553d",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Spatial Analogues examples\n",
@@ -17,7 +17,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "242b4d7f",
+   "id": "1",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -32,7 +32,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e92ad451",
+   "id": "2",
    "metadata": {},
    "source": [
     "## Input data\n",
@@ -43,7 +43,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "cd4d6320",
+   "id": "3",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -56,7 +56,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "8168d92e",
+   "id": "4",
    "metadata": {},
    "source": [
     "The goal is to find regions where the present climate is similar to that of a simulated future climate. We call \"candidates\" the dataset that contains the present-day indices. Here we use gridded observations provided by Natural Resources Canada (NRCan). This is the same data that was used as a reference for the bias-adjustment of the target simulation, which is essential to ensure the comparison holds.\n",
@@ -67,7 +67,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0e9689da",
+   "id": "5",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -81,7 +81,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5be4d98e",
+   "id": "6",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -91,7 +91,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a322a0e4",
+   "id": "7",
    "metadata": {},
    "source": [
     "Let's plot the timeseries over Chibougamau for both periods to get an idea of the climate change between these two periods. For the purpose of the plot, we'll need to convert the calendar of the data as the simulation uses a `noleap` calendar."
@@ -100,7 +100,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "edeae473",
+   "id": "8",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -118,7 +118,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "34bef449",
+   "id": "9",
    "metadata": {},
    "source": [
     "All the work is encapsulated in the `xclim.analog.spatial_analogs` function. By default, the function expects that the distribution to be analysed is along the \"time\" dimension, like in our case. Inputs are datasets of indices, the target and the candidates should have the same indices and at least the `time` variable in common. Normal `xarray` broadcasting rules apply for the other dimensions.\n",
@@ -129,7 +129,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "b376b5f5",
+   "id": "10",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -155,7 +155,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c038fbac",
+   "id": "11",
    "metadata": {},
    "source": [
     "This shows that the average temperature projected by our simulation for Chibougamau in 2041-2070 will be similar to the 1981-2010 average temperature of a region approximately extending zonally between 46°N and 47°N. Evidently, this metric is limited as it only compares the time averages. Let's run this again with the \"Zech-Aslan\" metric, one that compares the whole distribution."
@@ -164,7 +164,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "01fa92ce",
+   "id": "12",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -181,7 +181,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5f6116b6",
+   "id": "13",
    "metadata": {},
    "source": [
     "The new map is quite similar to the previous one, but notice how the scale has changed. Each metric defines its own scale (see the docstrings), but in all cases, lower values imply fewer differences between distributions. Notice also how the best analog has moved. This illustrates a common issue with these computations : there's a lot of noise in the results, and the absolute minimum may be extremely sensitive and move all over the place."
@@ -189,7 +189,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "68c15c41",
+   "id": "14",
    "metadata": {},
    "source": [
     "These univariate analogies are interesting, but the real power of this method is that it can perform multivariate analyses."
@@ -198,7 +198,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "df89b1ab",
+   "id": "15",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -213,7 +213,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "7f396582",
+   "id": "16",
    "metadata": {},
    "source": [
     "As said just above, results depend on the metric used. For example, some of the metrics include some sort of standardization, while others don't. In the latter case, this means the absolute magnitude of the indices influences the results, i.e. analogies depend on the units. This information is written in the docstring.\n",
@@ -224,7 +224,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "476b4e69",
+   "id": "17",
    "metadata": {},
    "outputs": [],
    "source": [
diff --git a/docs/notebooks/partitioning.ipynb b/docs/notebooks/partitioning.ipynb
index f650247c7..a6c8d7100 100644
--- a/docs/notebooks/partitioning.ipynb
+++ b/docs/notebooks/partitioning.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "abc58a4e-17ba-4d5e-9e7f-2a5f9f02ed2c",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Uncertainty partitioning\n",
@@ -16,7 +16,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0c43c0a2-6b8d-4382-ab9f-c1c8f0885b9d",
+   "id": "1",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -41,7 +41,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a3f6363e-cadd-4d7a-8fad-54c578e193a0",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -74,7 +74,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a0a95fd3-8916-42eb-af24-295672dc8b49",
+   "id": "3",
    "metadata": {},
    "source": [
     "## Create an ensemble\n",
@@ -89,7 +89,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "9f694023-d81a-405f-8f1a-de62570d45e7",
+   "id": "4",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -115,7 +115,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c55bf634-a033-4926-a78d-1373cb5b24c0",
+   "id": "5",
    "metadata": {},
    "source": [
     "Now we're able to partition the uncertainties between scenario, model, and variability using the approach from Hawkins and Sutton.\n",
@@ -126,7 +126,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a4b9a745-0f69-4644-9da1-b7d683a49789",
+   "id": "6",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -138,7 +138,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "41af418d-9e92-433c-800c-6ba28ff7684c",
+   "id": "7",
    "metadata": {},
    "source": [
     "From there, it's relatively straightforward to compute the relative strength of uncertainties, and create graphics similar to those found in scientific papers.\n",
@@ -151,7 +151,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a1520124-7c8d-4c69-a284-2c6e6770ca34",
+   "id": "8",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -227,7 +227,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5e543932-b774-4845-a9f2-7773acb3b592",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
diff --git a/environment.yml b/environment.yml
index a2211ea6b..ecb6c521c 100644
--- a/environment.yml
+++ b/environment.yml
@@ -61,7 +61,7 @@ dependencies:
   - pytest-cov
   - pytest-socket
   - pytest-xdist >=3.2
-  - ruff >=0.1.0
+  - ruff >=0.2.0
   - sphinx
   - sphinx-autodoc-typehints
   - sphinx-codeautolink
diff --git a/pyproject.toml b/pyproject.toml
index 43721cfa1..a8d0506bd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -80,7 +80,7 @@ dev = [
   "pytest-cov",
   "pytest-socket",
   "pytest-xdist[psutil] >=3.2",
-  "ruff >=0.1.0",
+  "ruff >=0.2.0",
   "tokenize-rt",
   "tox >=4.0",
   # "tox-conda", # Will be added when a tox@v4.0+ compatible plugin is released.
@@ -257,6 +257,11 @@ exclude = [
   "build",
   ".eggs"
 ]
+
+[tool.ruff.format]
+line-ending = "auto"
+
+[tool.ruff.lint]
 ignore = [
   "D205",
   "D400",
@@ -270,10 +275,10 @@ select = [
   "W"
 ]
 
-[tool.ruff.flake8-bandit]
+[tool.ruff.lint.flake8-bandit]
 check-typed-exception = true
 
-[tool.ruff.flake8-import-conventions.aliases]
+[tool.ruff.lint.flake8-import-conventions.aliases]
 "matplotlib.pyplot" = "plt"
 "xclim.indices" = "xci"
 numpy = "np"
@@ -281,20 +286,17 @@ pandas = "pd"
 scipy = "sp"
 xarray = "xr"
 
-[tool.ruff.format]
-line-ending = "auto"
-
-[tool.ruff.isort]
+[tool.ruff.lint.isort]
 known-first-party = ["xclim"]
 case-sensitive = true
 detect-same-package = false
 lines-after-imports = 1
 no-lines-before = ["future", "standard-library"]
 
-[tool.ruff.mccabe]
+[tool.ruff.lint.mccabe]
 max-complexity = 20
 
-[tool.ruff.per-file-ignores]
+[tool.ruff.lint.per-file-ignores]
 "docs/*.py" = ["D100", "D101", "D102", "D103"]
 "tests/*.py" = ["D100", "D101", "D102", "D103"]
 "xclim/**/__init__.py" = ["F401", "F403"]
@@ -305,8 +307,8 @@ max-complexity = 20
 "xclim/indices/fire/_cffwis.py" = ["D103"]
 "xclim/sdba/utils.py" = ["D103"]
 
-[tool.ruff.pycodestyle]
+[tool.ruff.lint.pycodestyle]
 max-doc-length = 180
 
-[tool.ruff.pydocstyle]
+[tool.ruff.lint.pydocstyle]
 convention = "numpy"
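
Note on the codespell hook added above: the `additional_dependencies: [ 'tomli' ]` entry is there so codespell can parse TOML configuration on Python versions without a built-in TOML reader. This changeset does not itself add a `[tool.codespell]` table to `pyproject.toml`; the snippet below is only a sketch of the kind of table the hook would pick up, and every key and value shown is an illustrative example, not part of this diff.

# Hypothetical [tool.codespell] table for pyproject.toml -- illustrative only,
# not part of this changeset. codespell reads it via tomli on Python < 3.11.
[tool.codespell]
skip = "*.ipynb,*.svg"        # example: comma-separated globs to leave unchecked
ignore-words-list = "hist,nd" # example: tokens codespell should not flag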