Add vulture to find dead code (#1717)
### What kind of change does this PR introduce?

* Adds a hook to run `vulture`, a tool that finds dead code using static
analysis and reports each result with a confidence score
* Updates the pre-commit hooks to latest versions
* Removes the obsolete `construct_moving_yearly_window` and
`unpack_moving_yearly_window` functions

### Does this PR introduce a breaking change?

No. The removed functions had been slated for removal since around the end
of 2023.

### Other information:

https://github.com/jendrikseipp/vulture
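As a rough illustration of what vulture does, the core idea of static dead-code detection can be sketched with only the standard library's `ast` module. The `find_unused_functions` helper below is hypothetical and vastly simplified; the real tool also tracks classes, attributes, variables, whitelists, and confidence scoring:

```python
import ast


def find_unused_functions(source: str) -> set[str]:
    """Report function names that are defined but never referenced.

    A toy sketch of static dead-code detection; vulture itself is far
    more thorough (classes, attributes, whitelists, confidence scores).
    """
    tree = ast.parse(source)
    # Every function definition in the module.
    defined = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}
    # Every name that is read (loaded) somewhere.
    used = {
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
    }
    return defined - used


sample = """
def keep(x):
    return x + 1

def dead(x):
    return x - 1

print(keep(1))
"""
print(find_unused_functions(sample))  # → {'dead'}
```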
Zeitsperre authored Apr 18, 2024
2 parents d27b4ed + 6e44622 commit 0738b80
Showing 16 changed files with 68 additions and 76 deletions.
10 changes: 7 additions & 3 deletions .pre-commit-config.yaml
@@ -9,7 +9,7 @@ repos:
args: ['--py39-plus']
exclude: 'xclim/core/indicator.py'
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.5.0
+ rev: v4.6.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
@@ -41,7 +41,7 @@ repos:
hooks:
- id: isort
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.3.5
+ rev: v0.3.7
hooks:
- id: ruff
- repo: https://github.com/pylint-dev/pylint
@@ -55,6 +55,10 @@ repos:
- id: flake8
additional_dependencies: [ 'flake8-alphabetize', 'flake8-rst-docstrings ']
args: [ '--config=.flake8' ]
+ - repo: https://github.com/jendrikseipp/vulture
+ rev: 'v2.11'
+ hooks:
+ - id: vulture
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.8.5
hooks:
@@ -87,7 +91,7 @@ repos:
hooks:
- id: gitleaks
- repo: https://github.com/python-jsonschema/check-jsonschema
- rev: 0.28.1
+ rev: 0.28.2
hooks:
- id: check-github-workflows
- id: check-readthedocs
5 changes: 5 additions & 0 deletions CHANGES.rst
@@ -10,6 +10,10 @@ Announcements
^^^^^^^^^^^^^
* `xclim` has migrated its development branch name from `master` to `main`. (:issue:`1667`, :pull:`1669`).

+ Breaking changes
+ ^^^^^^^^^^^^^^^^
+ * The previously deprecated functions ``xclim.sdba.processing.construct_moving_yearly_window`` and ``xclim.sdba.processing.unpack_moving_yearly_window`` have been removed. These functions have been replaced by ``xclim.core.calendar.stack_periods`` and ``xclim.core.calendar.unstack_periods``. (:pull:`1717`).

Bug fixes
^^^^^^^^^
* Fixed a bug in sdba's ``map_groups`` that prevented passing DataArrays with cftime coordinates if the ``sdba_encode_cf`` option was True. (:issue:`1673`, :pull:`1674`).
@@ -24,6 +28,7 @@ Internal changes
* Reorganized GitHub CI build matrices to run the doctests more consistently. (:pull:`1709`).
* Removed the experimental `numba` and `llvm` dependency installation steps in the `tox.ini` file. Added `numba@main` to the upstream dependencies. (:pull:`1709`).
* Added the `tox-gh` dependency to the development installation recipe. This will soon be required for running the `tox` test ensemble on GitHub Workflows. (:pull:`1709`).
+ * Added the `vulture` static code analysis tool for finding dead code to the development dependency list and linters (makefile, tox and pre-commit hooks). (:pull:`1717`).

v0.48.2 (2024-02-26)
--------------------
1 change: 1 addition & 0 deletions Makefile
@@ -57,6 +57,7 @@ lint: ## check style with flake8 and black
isort --check xclim tests
ruff xclim tests
flake8 --config=.flake8 xclim tests
+ vulture xclim tests
nbqa black --check docs
blackdoc --check --exclude=xclim/indices/__init__.py xclim
blackdoc --check docs
9 changes: 9 additions & 0 deletions pyproject.toml
@@ -84,6 +84,7 @@ dev = [
"tox >=4.0",
# "tox-conda", # Will be added when a [email protected]+ compatible plugin is released.
"tox-gh >=1.3.1",
+ "vulture",
"xdoctest",
"yamllint",
# Documentation and examples
@@ -312,3 +313,11 @@ max-doc-length = 180

[tool.ruff.lint.pydocstyle]
convention = "numpy"

+ [tool.vulture]
+ exclude = []
+ ignore_decorators = ["@pytest.fixture"]
+ ignore_names = []
+ min_confidence = 90
+ paths = ["xclim", "tests"]
+ sort_by_size = true
2 changes: 1 addition & 1 deletion tests/test_cli.py
@@ -374,7 +374,7 @@ def test_release_notes_failure(method, error):
assert error in results.output


- def test_show_version_info(capsys):
+ def test_show_version_info():
runner = CliRunner()
results = runner.invoke(cli, ["show_version_info"])
assert "INSTALLED VERSIONS" in results.output
2 changes: 1 addition & 1 deletion tests/test_formatting.py
@@ -47,7 +47,7 @@ def test_indicator_docstring():

def test_update_xclim_history(atmosds):
@fmt.update_xclim_history
- def func(da, arg1, arg2=None, arg3=None):
+ def func(da, arg1, arg2=None, arg3=None):  # noqa: F841
return da

out = func(atmosds.tas, 1, arg2=[1, 2], arg3=None)
2 changes: 1 addition & 1 deletion tests/test_sdba/conftest.py
@@ -105,7 +105,7 @@ def qds_month():


@pytest.fixture
- def ref_hist_sim_tuto(socket_enabled):
+ def ref_hist_sim_tuto(socket_enabled):  # noqa: F841
"""Return ref, hist, sim time series of air temperature.
socket_enabled is a fixture that enables the use of the internet to download the tutorial dataset while the
4 changes: 3 additions & 1 deletion tests/test_units.py
@@ -291,7 +291,9 @@ def dryness_index(


def test_declare_relative_units():
- def index(data: xr.DataArray, thresh: Quantified, dthreshdt: Quantified):
+ def index(
+     data: xr.DataArray, thresh: Quantified, dthreshdt: Quantified  # noqa: F841
+ ):
return xr.DataArray(1, attrs={"units": "rad"})

index_relative = declare_relative_units(thresh="<data>", dthreshdt="<data>/[time]")(
2 changes: 2 additions & 0 deletions tox.ini
@@ -37,13 +37,15 @@ deps =
isort==5.13.2
nbqa
ruff>=0.1.0
+ vulture
yamllint
commands_pre =
commands =
black --check xclim tests
isort --check xclim tests
ruff xclim tests
flake8 --config=.flake8 xclim tests
+ vulture xclim tests
nbqa black --check docs
blackdoc --check --exclude=xclim/indices/__init__.py xclim
blackdoc --check docs
4 changes: 2 additions & 2 deletions xclim/core/calendar.py
@@ -885,10 +885,10 @@ def is_offset_divisor(divisor: str, offset: str):
return False
# Reconstruct offsets anchored at the start of the period
# to have comparable quantities, also get "offset" objects
- mA, bA, sA, aA = parse_offset(divisor)
+ mA, bA, _sA, aA = parse_offset(divisor)
offAs = pd.tseries.frequencies.to_offset(construct_offset(mA, bA, True, aA))

- mB, bB, sB, aB = parse_offset(offset)
+ mB, bB, _sB, aB = parse_offset(offset)
offBs = pd.tseries.frequencies.to_offset(construct_offset(mB, bB, True, aB))
tB = pd.date_range("1970-01-01T00:00:00", freq=offBs, periods=13)

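The `_sA`/`_sB` renames in the hunk above follow a convention that vulture and other linters understand: prefixing an unpacked-but-unused value with an underscore marks it as deliberately ignored. A minimal standalone sketch (the `parse_pair` helper is hypothetical, not part of xclim):

```python
def parse_pair(text: str) -> tuple[str, str]:
    # The middle element (the separator itself) is deliberately unused;
    # the leading underscore signals this intent to linters such as vulture.
    left, _sep, right = text.partition("=")
    return left, right


print(parse_pair("key=value"))  # → ('key', 'value')
```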
2 changes: 1 addition & 1 deletion xclim/core/indicator.py
@@ -510,7 +510,7 @@ def __new__(cls, **kwds): # noqa: C901
return super().__new__(new)

@staticmethod
- def _parse_indice(compute, passed_parameters):
+ def _parse_indice(compute, passed_parameters):  # noqa: F841
"""Parse the compute function.
- Metadata is extracted from the docstring
5 changes: 3 additions & 2 deletions xclim/core/options.py
@@ -224,14 +224,15 @@ def __enter__(self):
"""Context management."""
return

- def _update(self, kwargs):
+ @staticmethod
+ def _update(kwargs):
"""Update values."""
for k, v in kwargs.items():
if k in _SETTERS:
_SETTERS[k](v)
else:
OPTIONS[k] = v

- def __exit__(self, option_type, value, traceback):
+ def __exit__(self, option_type, value, traceback):  # noqa: F841
"""Context management."""
self._update(self.old)
2 changes: 1 addition & 1 deletion xclim/core/units.py
@@ -63,7 +63,7 @@
# g is the most versatile float format.
units.default_format = "gcf"
# Switch this flag back to False. Not sure what that implies, but it breaks some tests.
- units.force_ndarray_like = False
+ units.force_ndarray_like = False  # noqa: F841
# Another alias not included by cf_xarray
units.define("@alias percent = pct")

2 changes: 1 addition & 1 deletion xclim/indices/fire/_cffwis.py
@@ -892,7 +892,7 @@ def fire_weather_ufunc( # noqa: C901
ffmc0: xr.DataArray | None = None,
winter_pr: xr.DataArray | None = None,
season_mask: xr.DataArray | None = None,
- start_dates: str | xr.DataArray | None = None,
+ start_dates: str | xr.DataArray | None = None,  # noqa: F841
indexes: Sequence[str] | None = None,
season_method: str | None = None,
overwintering: bool = False,
7 changes: 1 addition & 6 deletions xclim/sdba/__init__.py
@@ -9,12 +9,7 @@
from . import adjustment, detrending, measures, processing, properties, utils
from .adjustment import *
from .base import Grouper
- from .processing import (
-     construct_moving_yearly_window,
-     stack_variables,
-     unpack_moving_yearly_window,
-     unstack_variables,
- )
+ from .processing import stack_variables, unstack_variables

# TODO: ISIMIP ? Used for precip freq adjustment in biasCorrection.R
# Hempel, S., Frieler, K., Warszawski, L., Schewe, J., & Piontek, F. (2013). A trend-preserving bias correction &ndash;
85 changes: 29 additions & 56 deletions xclim/sdba/processing.py
@@ -5,21 +5,14 @@
"""
from __future__ import annotations

- import warnings
from collections.abc import Sequence

import dask.array as dsk
import numpy as np
import xarray as xr
from xarray.core.utils import get_temp_dimname

- from xclim.core.calendar import (
-     get_calendar,
-     max_doy,
-     parse_offset,
-     stack_periods,
-     unstack_periods,
- )
+ from xclim.core.calendar import get_calendar, max_doy, parse_offset
from xclim.core.formatting import update_xclim_history
from xclim.core.units import convert_units_to, infer_context, units
from xclim.core.utils import uses_dask
@@ -29,6 +22,22 @@
from .nbutils import _escore
from .utils import ADDITIVE, copy_all_attrs

+ __all__ = [
+     "adapt_freq",
+     "escore",
+     "from_additive_space",
+     "jitter",
+     "jitter_over_thresh",
+     "jitter_under_thresh",
+     "normalize",
+     "reordering",
+     "stack_variables",
+     "standardize",
+     "to_additive_space",
+     "unstack_variables",
+     "unstandardize",
+ ]


@update_xclim_history
def adapt_freq(
@@ -206,24 +215,24 @@ def jitter(
minimum = convert_units_to(minimum, x) if minimum is not None else 0
minimum = minimum + np.finfo(x.dtype).eps
if uses_dask(x):
- jitter = dsk.random.uniform(
+ jitter_dist = dsk.random.uniform(
low=minimum, high=lower, size=x.shape, chunks=x.chunks
)
else:
- jitter = np.random.uniform(low=minimum, high=lower, size=x.shape)
- out = out.where(~((x < lower) & notnull), jitter.astype(x.dtype))
+ jitter_dist = np.random.uniform(low=minimum, high=lower, size=x.shape)
+ out = out.where(~((x < lower) & notnull), jitter_dist.astype(x.dtype))
if upper is not None:
if maximum is None:
raise ValueError("If 'upper' is given, so must 'maximum'.")
upper = convert_units_to(upper, x)
maximum = convert_units_to(maximum, x)
if uses_dask(x):
- jitter = dsk.random.uniform(
+ jitter_dist = dsk.random.uniform(
low=upper, high=maximum, size=x.shape, chunks=x.chunks
)
else:
- jitter = np.random.uniform(low=upper, high=maximum, size=x.shape)
- out = out.where(~((x >= upper) & notnull), jitter.astype(x.dtype))
+ jitter_dist = np.random.uniform(low=upper, high=maximum, size=x.shape)
+ out = out.where(~((x >= upper) & notnull), jitter_dist.astype(x.dtype))

copy_all_attrs(out, x) # copy attrs and same units
return out
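The rename above (`jitter` → `jitter_dist`) avoids shadowing the enclosing `jitter` function's name. The underlying idea, replacing values below a threshold with small uniform noise, can be sketched without xarray or dask. This is a simplified, list-based sketch; the real function also handles an upper bound, unit conversion, NaNs, and dask arrays:

```python
import random


def jitter_under_thresh(values, lower, minimum=1e-12):
    # Replace every value strictly below `lower` with a random draw
    # from the interval [minimum, lower]; leave other values untouched.
    return [
        random.uniform(minimum, lower) if v < lower else v
        for v in values
    ]


jittered = jitter_under_thresh([0.0, 0.2, 0.5, 2.0], lower=1.0)
print(all(v >= 1e-12 for v in jittered))  # → True (no exact zeros remain)
```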
Expand Down Expand Up @@ -484,42 +493,6 @@ def _get_number_of_elements_by_year(time):
return int(N_in_year)


- def construct_moving_yearly_window(
-     da: xr.Dataset, window: int = 21, step: int = 1, dim: str = "movingwin"
- ):
-     """Deprecated function.
-     Use :py:func:`xclim.core.calendar.stack_periods` instead, renaming ``step`` to ``stride``.
-     Beware of the different default value for `dim` ("period").
-     """
-     warnings.warn(
-         FutureWarning,
-         (
-             "`construct_moving_yearly_window` is deprecated and will be removed in a future version. "
-             f"Please use xclim.core.calendar.stack_periods(da, window={window}, stride={step}, dim='{dim}', freq='YS') instead."
-         ),
-     )
-     return stack_periods(da, window=window, stride=step, dim=dim, freq="YS")
-
-
- def unpack_moving_yearly_window(
-     da: xr.DataArray, dim: str = "movingwin", append_ends: bool = True
- ):
-     """Deprecated function.
-     Use :py:func:`xclim.core.calendar.unstack_periods` instead.
-     Beware of the different default value for `dim` ("period"). The new function always behaves like ``appends_ends=True``.
-     """
-     warnings.warn(
-         FutureWarning,
-         (
-             "`unpack_moving_yearly_window` is deprecated and will be removed in a future version. "
-             f"Please use xclim.core.calendar.unstack_periods(da, dim='{dim}') instead."
-         ),
-     )
-     return unstack_periods(da, dim=dim)


@update_xclim_history
def to_additive_space(
data: xr.DataArray,
@@ -754,17 +727,17 @@ def stack_variables(ds: xr.Dataset, rechunk: bool = True, dim: str = "multivar")
# Store original arrays' attributes
attrs = {}
# sort to have coherent order with different datasets
- datavars = sorted(ds.data_vars.items(), key=lambda e: e[0])
- nvar = len(datavars)
- for i, (nm, var) in enumerate(datavars):
+ data_vars = sorted(ds.data_vars.items(), key=lambda e: e[0])
+ nvar = len(data_vars)
+ for i, (nm, var) in enumerate(data_vars):
for name, attr in var.attrs.items():
- attrs.setdefault("_" + name, [None] * nvar)[i] = attr
+ attrs.setdefault(f"_{name}", [None] * nvar)[i] = attr

# Special key used for later `unstacking`
attrs["is_variables"] = True
- var_crd = xr.DataArray([nm for nm, vr in datavars], dims=(dim,), name=dim)
+ var_crd = xr.DataArray([nm for nm, vr in data_vars], dims=(dim,), name=dim)

- da = xr.concat([vr for nm, vr in datavars], var_crd, combine_attrs="drop")
+ da = xr.concat([vr for nm, vr in data_vars], var_crd, combine_attrs="drop")

if uses_dask(da) and rechunk:
da = da.chunk({dim: -1})
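The renames in this last hunk (`datavars` → `data_vars`, `"_" + name` → an f-string) are cosmetic; the attribute-stacking pattern itself can be sketched with plain dicts. This is a simplified sketch with no xarray; the variable names and attributes below are hypothetical:

```python
# Per-variable attributes are collected into lists keyed by "_" + attr name,
# aligned with the sorted variable order used to build the stacked dimension.
variables = {"tasmax": {"units": "K"}, "pr": {"units": "mm/d"}}

data_vars = sorted(variables.items())  # coherent order across datasets
nvar = len(data_vars)

attrs = {}
for i, (name, var_attrs) in enumerate(data_vars):
    for key, value in var_attrs.items():
        attrs.setdefault(f"_{key}", [None] * nvar)[i] = value

attrs["is_variables"] = True  # special key checked later when unstacking
print(attrs)  # → {'_units': ['mm/d', 'K'], 'is_variables': True}
```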
