diff --git a/docs/developer/quickstart.rst b/docs/developer/quickstart.rst
index 46a8eee1..5f6da394 100644
--- a/docs/developer/quickstart.rst
+++ b/docs/developer/quickstart.rst
@@ -1,10 +1,9 @@
 Quick start
 =============
 
-This section is for developers and contributors of AeoLiS. It describes how to get started as a developer with the intention to improve or extend the source code of AeoLiS.
+This section is for developers and contributors of AeoLiS. It describes how to get started as a developer with the intention of improving or extending the source code.
 
 If you are a user, please see the :ref:`user documentation ` instead.
 
-
 Development workflow
 -----------------------
@@ -17,11 +16,12 @@ Details on how to propose and push changes to the source code are documented in
 AeoLiS collaborative development workflow.
 
+.. _test set up:
 
 Set up
 -------
 
-AeoLiS is developed in Python 3.10.x and tested for versions 3.8, 3.9, and 3.11. It is recommended to use a virtual environment to install the required dependencies.
+AeoLiS is developed in Python 3.10.x and tested for versions 3.9 and 3.11. It is recommended to use a virtual environment to install the dependencies.
 
 .. code-block:: bash
 
@@ -49,7 +49,7 @@ To run the tests, run the following command from the root directory of the repos
 
 Code Style
 '''''''''''
 
-AeoLiS strive to comply with the `PEP8 `_ code style guide. We recommend `flake8 `_ to check the code style. To run flake8, run the following command from the root directory of the source code:
+AeoLiS strives to comply with the `PEP8 `_ code style guide. We recommend `flake8 `_ to check the code style. To run flake8, use the following command from the ``src/`` directory of the source code:
 
 .. code-block:: bash
 
@@ -58,14 +58,15 @@
-AeoLiS uses `napoleon `_ to parse the docstrings in the source code and automatically generate this documentation. We recommend to follow the `Numpy style `_ for docstrings. The use of `type hints `_ is encouraged. For example:
+AeoLiS uses `napoleon `_ to parse the docstrings in the source code and automatically generate documentation. We recommend following the `Numpy style `_ for docstrings. The use of `type hints `_ is encouraged. For example:
 
 .. code-block:: python
 
     def my_function(arg1:int, arg2:str) -> bool:
         """Summary line.
 
-        Extended summary about the fuctionality of the function, using the Numpy style.
+        Extended summary about the functionality of the function,
+        using the Numpy style.
 
         Parameters:
         -----------
@@ -83,7 +84,7 @@ AeoLiS uses `napoleon
 
         return True
 
-To check the docstrings, intall `flake8-docstrings` and run the following command from the root directory of the source code:
+To check the docstrings in a module, install ``flake8-docstrings`` and run the following command from the ``src/`` directory:
 
 .. code-block:: bash
 
diff --git a/docs/developer/testing-introduction.rst b/docs/developer/testing-introduction.rst
index 0be80202..bdcc0f1f 100644
--- a/docs/developer/testing-introduction.rst
+++ b/docs/developer/testing-introduction.rst
@@ -2,15 +2,15 @@
 Introduction to Software Testing
 =====================================
 
-This document introduces AeoLiS developers to the fundamental concepts of software testing using `Pytest `_. It provides a quick introduction to software testing and shows some examples using existing tests in AeoLiS.
+This section is an introduction to the fundamental concepts of software testing using `Pytest `_. It provides a quick start to software testing and shows some examples using existing tests in AeoLiS.
 What is software testing?
 -------------------------
 
-In short, **software testing** is the process of verifying and validating that a *software does what it is supposed to do*. The collection of *actions* that a software does is called the *behaviour* of that software, and in order to write test for a software, the developer needs to understand its expected behaviour. The expected behaviour of a software is defined by the user's *requirements*. Thefore, writing useful tests for a software becomes easier when the developer has a clear understanding its be behaviour based on the user's requirements.
+In short, **software testing** is the process of verifying and validating that a *software does what it is supposed to do*. The collection of *actions* that a software does is called the *behaviour* of the software, and in order to write tests for a software, the developer needs to understand its expected behaviour. The expected behaviour of a software is defined by the user's *requirements*. Therefore, writing useful tests for a software is easier when the developer has a clear understanding of the user's requirements.
 
-Consider a software with only the following Python function that compares two strings:
+Consider a software with a single Python function that compares two strings:
 
 .. code-block:: python
 
@@ -22,7 +22,7 @@ Consider a software with only the following Python function that compares two st
 
         return False
 
-The expected behaviour of this function is that it should return ``True`` if the two strings are equal and ``False`` otherwise. Now that we know the expected behaviour for this function, we can write tests to verify whether the function is working as expected or not. For example, we can test the fuction for the following cases: firts, we can write a test that calls the function with two equal strings and checks whether the output is ``True``. Second, we can write a test that calls the function with two different strings and checks whether the output is ``False``.
+The expected behaviour of this function is that it should return ``True`` if the two strings are equal and ``False`` otherwise. Now that we know the expected behaviour of this code, we can write tests to verify whether the function is working as expected or not. For example, we can test the following cases: first, we can write a test that calls the function with two equal strings and checks whether the output is ``True``. Second, we can write a test that calls the function with two different strings and checks whether the output is ``False``.
 
 .. code-block:: python
 
     compare_strings("aeolis", "aeolis")
 
@@ -36,22 +36,22 @@ The expected behaviour of this function is that it should return ``True`` if the
 
     compare_strings("aeolis", "Aeolis")
 
-What about a third case, when the two strings contain the same letters but one of them is in uppercase and the other is in lowercase? Should the function return ``True`` or ``False``? This is a question that the developer needs to answer based on the user's requirements. If the user's requirements state that the function should return ``True`` in this case, then the developer can write a test for this case as well. In other words, *it is not possible to write meaningful tests for a software without knowing its expected behaviour.* One way to find out which tests need to be written for a software is to ask: How the software should behave this particular case? And the answer lies in understand the user's requirements.
+What about a third case? For example, the two strings might contain the same letters, but one of them is in uppercase and the other in lowercase. Should the function return ``True`` or ``False``? This is a question that the developer needs to answer based on the user's requirements. If the user's requirements state that the function should return ``True`` in that case, then the developer can write a test to check that as well. In short, *it is not possible to write meaningful tests for a software without knowing its expected behaviour.* A way to find out which tests need to be written for a software is to ask: **How should the software behave in this particular case?** The answer to such a question lies in understanding the user's requirements.
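+For illustration, suppose the requirements state that the comparison must be case-insensitive (an assumption made here only for the sake of the example). A test for the third case could then be written as the sketch below. Note that it would fail against the implementation above, and that failure is exactly the feedback telling the developer that the source code needs updating:
+
+.. code-block:: python
+
+    def test_compare_strings_ignores_case():
+        # Assumed requirement: strings that differ only in case are equal.
+        # Uses the compare_strings function defined in the example above.
+        assert compare_strings("aeolis", "Aeolis") is True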
 
 .. note::
 
-   - The example above is a very simple one. In real life, the expected behaviour of a software is not always clear. In such cases, the developer needs to discuss with the user and other developers to understand the expected behaviour of a software.
-   - Keep in mind that writing tests for a software is an iterative process. As the software evolves, the tests need to be updated to reflect the new behaviour of the software. Software tests provide feedback to improve the source code. If while writing a test, a case is found that the software does not handle, then the developer may need to update the source code to handle that case.
+   - The example above is somewhat simplistic. In a real case, the expected behaviour of a software is not always clear, and the developer needs to discuss with the users and other developers to understand which behaviour is expected.
+   - Keep in mind that writing tests for a software is an iterative process. As the software evolves, the tests need to be updated to reflect the new behaviour of the software. Software tests provide feedback to improve the source code; writing tests can reveal cases that the software does not yet handle but should. As a consequence, the developer may decide to update the source code to handle such cases.
 
 
 How software tests work?
 ------------------------
 
-To demonstrate how software tests work we use Pytest, the test suite used in AeoLiS. Pytest is a Python package and testing framework that is used to test software. It is a third-party package that it is installed when you install aeolis using the development depencies.
+To demonstrate how software tests work, we will use Pytest, the test suite used in AeoLiS. Pytest is a third-party Python package and testing framework; it is installed together with the development dependencies:
 
 1. Follow the instruction in the :doc:`quick start ` to set up a development environment.
-2. Clone the aeolis-python GitHub repository to your machine and execute ``pytest`` from the root of the repsoitory using the terminal. This produces following output:
+2. Clone the *aeolis-python* repository to your machine and execute ``pytest`` from the root of the repository using the terminal. This produces the following output:
 
 .. code-block:: console
 
@@ -68,7 +68,7 @@ To demonstrate how software tests work we use Pytest, the test suite used in Aeo
 
 .. important::
 
-   It takes approximately 2.5 minutes for all the test files to run. Once the tests finish running
+   It takes approximately 2.5 minutes for all the tests to run. Executing ``pytest`` on the command line starts the execution of a set of Python files whose names start with **test_** and are located in the ``aeolis/tests/`` folder.
+   These files contain code that is used to test the aeolis source code.
 
-   Once the tests finish running, you will see an output similar to this in the console:
 
@@ -101,7 +101,7 @@ To demonstrate how software tests work we use Pytest, the test suite used in Aeo
 
    ======================= 18 passed, 9054 warnings in 118.43s (0:01:58) ======================
 
-The test **session section** displays the status of the tests. It shows the number of tests that passed and the number of tests that failed. In this example, all the tests passed. The **warnings summary** section displays the number of warnings that were generated during the execution of the tests. Warnings are a feature of Pytest that checks for any potential issues in the code, but do not affect the result of the tests.
+The test **session section** displays the status of the tests. It shows the number of tests that passed and the number of tests that failed. In this example, all the tests have passed. The **warnings summary** section displays the number of warnings that were generated during the execution of the tests. Warnings point to potential issues in the code, but they do not affect the result of the tests.
 
 What are software tests?
 ------------------------
 
@@ -118,37 +118,37 @@ Software tests are pieces of code that verify whether a target software is funct
 
    :lines: 30-45
 
 .. note::
-   - Classes are a way to group and organize tests in Pytest. In the example above, the class ``TestIsIterable`` is grouping the test cases for the ``is_iterable`` function. The class name is arbitrary, but it is a good practice to name it after the function that it is testing and add *Test* as prefix.
+   - Classes are a way to group and organize tests in Pytest. In the example above, the class ``TestIsIterable`` groups the test cases for the ``is_iterable`` function. The name of the class can be arbitrary, but it is good practice to name it after the function that it tests and add *Test* as a prefix, as illustrated in the sketch below.
    - The ``assert`` statements are the ones that perform the actual checks. A test function can contain several ``assert`` statements. However, a best practice is to have one or two ``assert`` statements per test function. If a test function ends up with too many ``assert`` statements, it is best to split it into two or more test functions.
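+As an illustration of these two conventions, the tests for the ``compare_strings`` function from the previous section could be grouped in a class like this (a hypothetical sketch, not part of the AeoLiS test suite):
+
+.. code-block:: python
+
+    class TestCompareStrings:
+
+        def test_equal_strings(self):
+            # One assert per test function keeps failures easy to trace
+            assert compare_strings("aeolis", "aeolis") is True
+
+        def test_different_strings(self):
+            assert compare_strings("aeolis", "beach") is False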
 
 Types of tests
 --------------
 
-Software tests are classified into different types based on their scope on the software that they test:
+Software tests can be of different types based on the scope of the source code they test:
 
 - **Unit tests:** test a single function or a small piece of the source code. To learn how to write unit tests for aeolis, read the section :doc:`unit testing `. Unit tests for Aeolis are located in the ``aeolis/tests/`` directory.
 
-- **Regression tests:** a type of black-box testes where the software as a whole is tested by feeding it inputs and examining the outputs, and the internal structure of the software is rarely considered. AeoLiS currently has the following regression tests:
+- **Regression tests:** a type of black-box test where the software as a whole is tested by feeding it inputs and examining the outputs; the internal structure of the software is rarely considered. AeoLiS currently has the following regression tests:
 
-   - `test_netCDF_creation.py `_: which tests whether a netCDF file is created upon a successful completing the modeling tasks.
-   - `test_netCDF_content.py `_ which tests whether the content of the outputs (netCDF files) of running a model in the **current** version of AeoLiS is consistent with the content of outputs generated in previous versions of AeoLiS.
+   - `test_output_files_generation.py `_: which tests whether a netCDF file is created upon successful completion of the modeling tasks.
+   - `test_netCDF_file_content.py `_: which tests whether the contents of the outputs (netCDF files) of running a model in the **current** version of AeoLiS are consistent with the contents of the outputs generated in previous versions of AeoLiS.
 
-- **Integration tests:** test how parts of a modular software work together, or how the software behaves when it interacts with software it is expected to interact with. For example, AeoLiS has a few integrations tests that check if new versions of AeoLiS are compatible with different version of Python, starting with Python 3.8. These tests are not part of the source code, but they are executed automatically in the remote code repository. To learn more about these tests, read the section on :ref:`automated testing `.
+- **Integration tests:** test how parts of a modular software work together, or how the software behaves when it interacts with other software it is expected to interact with. For example, AeoLiS has a few integration tests that check whether new versions of AeoLiS are compatible with different versions of Python, starting with Python 3.9. These tests are not part of the source code, but they are executed automatically in the remote repository. To learn more about these tests, read the section on :ref:`automated testing `.
 
 .. _automated-testing:
 
 Automated testing in AeoLiS
 ---------------------------
 
-To catch any bugs introduced by new code changes, the test suite in Aeolis (unit tests + regression tests) runs automatically on every push to the main branch of the remote repository. This automated testing is enabled by `GitHub Actions `_. Test are run whenever code is pushed to the main branch (usually through a pull request), if any of the tests fail, merging changes are disabled until all tests are satisfied. The test status is displayed on GitHub using a green tick (pass) or a red cross (fail) next to the commit message.
+To catch any bugs introduced by new code changes, the test suite in AeoLiS (unit tests + regression tests) runs automatically on every push to the main branch of the remote repository. This automated testing is enabled by `GitHub Actions `_. Tests run whenever code is pushed to the main branch (usually through a pull request); if any of the tests fail, merging is disabled until all tests pass. The status of the tests is displayed on GitHub using a green tick (pass) or a red cross (fail) next to the commit message.
 
-The file `.github/workflows/python-app.yml `_ contains the instructions for automated testing. The tests are run using the pytest framework. The tests are run on a virtual machine (Ubuntu 20.04) using the GitHub-hosted runner and on Python versions 3.8-3.11. If you want to learn more about automated testing in GitHub, check `their documentation `_.
+The file `.github/workflows/run-tests.yml `_ contains the instructions for automated testing. The tests run with the Pytest framework on a virtual machine (Ubuntu 20.04) using the GitHub-hosted runner, on Python versions 3.9-3.11. If you want to learn more about automated testing in GitHub, check `their documentation `_.
 
 .. important::
 
    .. centered:: Software tests in a nutshell
 
-   1. A software test is essentially a piece of code, such as Python modulea or functions that are executed to verify whether a target software, for example, AeoLiS, is behaving as expected or not.
+   1. A software test is essentially a piece of code, such as Python modules or functions, that is executed to verify whether a target software, for example AeoLiS, is behaving as expected or not.
    #. Tests produce a pass or fail status as the output to indicate whether the target software is working as expected or not.
    #. There are different types of tests. The most common ones are unit tests, regression tests, and integration tests.
-   #. A test script is a collection of functions that are prefixed with the word `test_`. These functions call the functions in various modules within the aeolis source code with certain inputs and check whether the output is as expected. If the output is as expected, the test passes. If the output is not as expected, the test fails. This is the basic idea behind software tests. For an example, see the section *Example: formatting log messages* in :doc:`unit testing ` to learn how to write test functions.
+   #. In Pytest, a test script is a collection of functions that are prefixed with the word ``test_``. The test functions call the functions in various modules within the aeolis source code and check whether the output is as expected. If the output is as expected, the test passes; if it is not, the test fails. This is the basic idea behind software tests. For an example, see the section *Example: formatting log messages* in :doc:`unit testing ` to learn how to write test functions; a minimal sketch is also shown below.
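+To make the pass/fail idea concrete, here is a hypothetical sketch reusing the ``compare_strings`` function from earlier (these are not actual AeoLiS tests): the first function passes because the output matches the expectation, while the second fails because it does not:
+
+.. code-block:: python
+
+    def test_compare_strings_passes():
+        assert compare_strings("aeolis", "aeolis") is True   # expectation met: pass
+
+    def test_compare_strings_fails():
+        assert compare_strings("aeolis", "beach") is True    # expectation not met: fail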
diff --git a/docs/tutorials/2D-parabolic.rst b/docs/tutorials/2D-parabolic.rst
index 074b7417..055223de 100644
--- a/docs/tutorials/2D-parabolic.rst
+++ b/docs/tutorials/2D-parabolic.rst
@@ -1,5 +1,5 @@
 Parabolic dune model (Tutorial under construction)
-========================
+===================================================
 
 1. Install aeolis
diff --git a/docs/tutorials/sandmotor.rst b/docs/tutorials/sandmotor.rst
index 05b23a9b..d66669f0 100644
--- a/docs/tutorials/sandmotor.rst
+++ b/docs/tutorials/sandmotor.rst
@@ -1,5 +1,5 @@
 SandMotor model (Tutorial under construction)
-================
+==============================================
 
 1. Install aeolis