Tests using Ultraliser fail on main branch #23

Open
asoplata opened this issue Mar 25, 2024 · 3 comments
@asoplata (Contributor)

If I use the latest main branch, a venv that @mgeplf created, and the unstable version of Ultraliser on BB5, certain tests do not pass. In particular, if I run pytest tests/app/test_placement_hints.py::test_thalamus, I get the following output (edited for brevity):

platform linux -- Python 3.10.8, pytest-8.1.1, pluggy-1.4.0
rootdir: <DELETED>/atlas-placement-hints
collected 1 item


tests/app/test_placement_hints.py F                                                                                                        [100%]

==================================================================== FAILURES ====================================================================
_________________________________________________________________ test_thalamus __________________________________________________________________

    @skip_if_no_ultraliser                                                                                                              [882/1828]
    def test_thalamus():
        runner = CliRunner()
        with runner.isolated_filesystem():
            thalamus_mock = ThalamusMock(padding=10, shape=(60, 50, 60), layer_thickness_ratio=0.15)
            direction_vectors = np.zeros(thalamus_mock.annotation.raw.shape + (3,), dtype=float)
            direction_vectors[thalamus_mock.annotation.raw > 0] = (-1.0, 0.0, 0.0)

            thalamus_mock.annotation.save_nrrd("annotation.nrrd")
            thalamus_mock.annotation.with_data(direction_vectors).save_nrrd("direction_vectors.nrrd")
            with open("hierarchy.json", "w", encoding="utf-8") as file_:
                json.dump(thalamus_mock.region_map_dict, file_)

            result = runner.invoke(
                tested.thalamus,
                [
                    "--annotation-path",
                    "annotation.nrrd",
                    "--hierarchy-path",
                    "hierarchy.json",
                    "--direction-vectors-path",
                    "direction_vectors.nrrd",
                    "--output-dir",
                    "placement_hints",
                ],
            )
            assert result.exit_code == 0, str(result.output)

            # The values selected below as upper bounds are surprisingly large, which can be explained
            # as follows. Due to the shape and the size of the simplified brain region under test,
            # voxels close to the boundary of the volume are problematic (rays issued from them
            # all miss the top surface meshes which have a too low resolution). This problem is
            # aggravated by the splitting into two hemispheres. Increasing the dimensions of the tested
            # volume reduces the number of problematic voxels but makes the test much longer.
            # See for a picture of the problematic voxel volume generated by this test.
            #
            # The testing issues reported here are very similar to those encountered with CA1 and
            # isocortex below. Renderings of the created surface meshes and of problematic volumes
            # indicate that the algorithms are working as expected.

            with open("placement_hints/distance_report.json", encoding="utf-8") as file_:
                report = json.load(file_)
                distances_report = report["before interpolation"]
>               assert (
                    distances_report[
                        "Proportion of voxels whose rays make an obtuse"
                        " angle with the mesh normal at the intersection point"
                    ]
                    <= 0.15
                )
E               assert 0.33293333333333336 <= 0.15

tests/app/test_placement_hints.py:58: AssertionError
-------------------------------------------------------------- Captured stdout call --------------------------------------------------------------
<DELETED> (lots of output from Ultraliser building the meshes)

================================================================ warnings summary ================================================================
<DELETED>
============================================================ short test summary info =============================================================
FAILED tests/app/test_placement_hints.py::test_thalamus - assert 0.33293333333333336 <= 0.15
========================================================= 1 failed, 7 warnings in 46.96s =========================================================
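For what it's worth, the failing threshold is only one of several metrics in that report; a quick way to compare a run made with the old Ultraliser version against one made with the new version is to dump the whole "before interpolation" section. A rough sketch, assuming only the report layout visible in the test above:

```python
# Dump every metric in the report that the failing assertion reads, to compare
# runs made with different Ultraliser versions. Assumes only the layout visible
# in the test above: a "before interpolation" mapping of metric names to values
# in placement_hints/distance_report.json.
import json

with open("placement_hints/distance_report.json", encoding="utf-8") as file_:
    report = json.load(file_)

for metric, value in report["before interpolation"].items():
    print(metric, "->", value)
```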

Note that when I ran pytest before Ultraliser was loaded, this test was not run at all, due to its @skip_if_no_ultraliser decorator. Based on the absence of the substring "ultra" in CI output such as https://github.com/BlueBrain/atlas-placement-hints/actions/runs/7698187655/job/20976910013, I suspect that many CI tests are being skipped because Ultraliser is not installed in the test environments.
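For context, a decorator like that is usually just a skipif on whether the Ultraliser executables are on PATH; the sketch below is an assumption on my part, not necessarily how atlas-placement-hints actually implements it.

```python
# Assumed implementation of a skip-if-no-ultraliser decorator (this may not
# match the repository's actual code). It skips the test whenever the
# ultraVolume2Mesh executable (or whichever Ultraliser binary the test needs)
# is not on PATH, which is exactly how such tests vanish silently from CI
# runs that lack an Ultraliser install.
import shutil

import pytest

skip_if_no_ultraliser = pytest.mark.skipif(
    shutil.which("ultraVolume2Mesh") is None,
    reason="Ultraliser is not installed",
)
```

Running pytest with the -rs flag would at least make these skips, and their reasons, visible in the CI logs.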

I hope I'm wrong about this, but it's one possible explanation for why this Ultraliser-dependent test currently fails yet was not caught during the Ultraliser version update in PR #18.

asoplata assigned arnaudon and mgeplf and unassigned mgeplf on Mar 25, 2024
@asoplata (Contributor, Author)

@arnaudon Can you please check that PR #18 passes the tests in an environment where Ultraliser is loaded? I strongly suspect that that is the core issue.

@asoplata (Contributor, Author)

I just confirmed that this same test passes if I use 337623b and load an old version of Ultraliser (v0.1.0 from 2020-01), so it's almost certainly that change.

@arnaudon (Contributor)

Yes, you are likely right, but I don't like the oversmoothing of Ultraliser; I prefer using a simple marching cubes algorithm, which I was experimenting with for the cerebellum. It seems your test above produces a lower-quality mesh, but I have no idea how or why. It would be easier to just not use Ultraliser than to try to debug that, tbh (it's not interesting debugging at all). I'm not part of the atlas 'team' nor the atlas meetings, so I don't have a say in this. If this is an issue, feel free to revert this commit to the previous Ultraliser version. I'm not using the new version anyway, once I realised what it was doing ;)
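For reference, the "simple marching cubes" route mentioned above can be prototyped with scikit-image; the sketch below is my own illustration on a synthetic mask, not the actual cerebellum code.

```python
# Rough illustration of a plain marching-cubes surface extraction, as an
# alternative to Ultraliser's heavily smoothed meshes. This is not the
# cerebellum code referenced above; the mask is synthetic.
import numpy as np
from skimage import measure

# Hypothetical binary mask of the region of interest (1 inside, 0 outside),
# e.g. annotation.raw > 0 restricted to one hemisphere.
mask = np.zeros((60, 50, 60), dtype=np.uint8)
mask[10:50, 10:40, 10:50] = 1

# Extract the isosurface at 0.5; verts are voxel-space coordinates and
# faces index into verts. The spacing argument can carry voxel dimensions.
verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5)
print(verts.shape, faces.shape, normals.shape)
```

The resulting mesh is blocky (staircase artifacts from the voxel grid), so some smoothing or decimation would likely still be wanted, just much less aggressive than what Ultraliser applies.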
