Proposal: structure of a source-pairwise registration definition #32
There should be a priority setting (in the viewer) that defines which registration is considered if there are conflicts/multiple registrations for a source pair. In general: if there is a special
I wanted to bring this repo to your attention: https://github.com/image-transform-converters/image-transform-converters/tree/master/src/main/java/itc/converters
I think your spec looks like a good start. To fit this into MoBIE you would have to think about how to make this fit the
Perfect,
A scenario: I want to view something in source Now, how will the transformation of source
When I want the viewer to show the "Tomo anchor map" (EM) on top of the "LowMag Overview" (LightMic) with the Light Microscopy as the "active/primary source", I need to apply all the transformations indicated with the yellow arrows for the EM source and the green arrow for the Light Microscopy. Depending on the preferred registration method, I could also have chosen the "image/feature-based registration" between "Higher mag map" (EM) and "Higher mag image" (Light mic.) instead of the "landmark-based".
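Chaining these pairwise registrations can be sketched in code: each registration is an edge in a graph between sources, and showing one source in the space of the active/primary source means composing the affines along the connecting path. Everything below (the source names, the translation-only transforms, the `registrations` dict) is an illustrative assumption, not MoBIE API.

```python
# Pairwise registrations form a graph whose nodes are sources and whose
# edges carry 4x4 affine transforms. To display one source in the space
# of an "active/primary" source, compose the transforms along the
# connecting path. All names here are hypothetical.

def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

# Edges of the registration graph: (moving, fixed) -> affine taking
# moving-source coordinates into fixed-source coordinates.
registrations = {
    ("tomo_anchor", "higher_mag_em"): translation(5.0, 0.0, 0.0),
    ("higher_mag_em", "lowmag_overview"): translation(0.0, 3.0, 0.0),
}

def compose_chain(path):
    """Compose transforms along a path of source names."""
    total = identity()
    for moving, fixed in zip(path, path[1:]):
        total = matmul(registrations[(moving, fixed)], total)
    return total

m = compose_chain(["tomo_anchor", "higher_mag_em", "lowmag_overview"])
# The two translations accumulate: (5, 3, 0).
```

A real implementation would also need to invert edges that are traversed against their stored direction, and pick one chain when several exist (the priority setting discussed above).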
@constantinpape @tischi
@martinschorb I will update one or two projects tomorrow and send the links here.
In your current proposal I see only one source, namely the
You have several transformations for one source. As discussed, the easiest entry point to MoBIE would be if you would define different views where, for a couple of sources, the optimal transformations are chosen (hard-coded in the view) to optimally show those sources in a specific context. Then the user can switch between different views manually. I would recommend starting with this, because by writing out those views we will probably already learn something about the criteria for which transformations to pick in which context. Then, in a second step, we could think about a way for the viewer to (semi-)automatically switch the view, depending on some criteria that we would need to define.
If we go for one central repository of registrations, we would need to store them pairwise. There are pros and cons for either of these approaches, and we should definitely discuss whether we want to keep the registration information with each source or as a single array-type storage entity as part of the description of multi-modality.
@martinschorb I have pushed the updated clem-yeast project onto a branch now: |
I have a few questions about the dataset structure.
Also, I struggle to find the documentation on how to install the development version of MoBIE.
The dev version is on MoBIE-beta. If that's not described in the docs yet, it would indeed be nice if you could add it! However, the current spec is not deployed anywhere, because it is too unstable even for MoBIE-beta.
In the new spec, the most natural place would be in
I am not sure exactly what you mean by this. A source is loaded according to the xml AND the additional metadata in the
There are currently no multi-channel sources; @tischi would know more about bridging the compatibility with this feature.
Yes, it would be great to have a docpage for advanced users in mobie.github.io that explains how to a) install from the MoBIE-beta update site and b) install a dev version from IntelliJ.
I have the platybrowser data almost ready for the new spec now (gonna write you a mail once it's all pushed). So from my side we could push something to MoBIE beta and ask people to test it whenever you're ready.
Probably a good starting scenario to do a static test:
(from mobie/mobie-viewer-fiji#244) How could I define these global display/view scenarios (should we call them "scenes" to have clear nomenclature?)?
I agree, that's a good starting point.
I think "scene" is a good name to clearly demarcate this from other things in the spec. But note that "normView" is deprecated. In the new spec we have "normalizedAffine", which is a possible "viewerTransform", see https://mobie.github.io/specs/mobie_spec.html#view-metadata. To define the different scenes you could just have a view per scene (simplified):

```json
{
  "scene1": {
    "viewerTransform": {"normalizedAffine": [...]}
  },
  "scene2": {
    "viewerTransform": {"normalizedAffine": [...]}
  }
}
```

The question is how we "register" these scenes with the viewer then. I see two different options for this:
I think how to do this best still depends on how exactly we want this to map to the UI.
Do you have a spec2 dataset that already has bookmarks somewhere? I could not find any... Essentially the "scene" syntax would behave like a bookmark, except that it would be source-specific.
Here the scene would simply replace the bookmark, and in a single-tile-viewgrid case we are done. But then we need to include the grid:
In case there is no grid,
Does this concept make sense?
In fact, if we want to use the "scene" also to represent variable registrations as outlined above, it would need to live in
In this case, the sequence of transformations would be:
And the scene(s) would not be represented as bookmark-like
Another option would be to split the two use cases. That would be more clear and intuitive but would introduce an additional layer...
We would basically have 3 layers of transform for each source
On the viewer side (globally) we would have:
I am still not sure how the grid placement should behave in a single-tile case (no grid)...
And the grid placement would need to make sure the original voxel size is maintained.
@martinschorb you can find spec-v2 projects with bookmarks here:
The bookmarks are now in
For all of them the spec-v2 version is on the
I will try to have a look at the transformation things later.
I think there is a much simpler solution for all this, which is covered by the current spec already:

```json
{
  "views": {
    "scene1": {
      "sourceDisplays": [
        {"imageDisplay": {"sources": ["im1", "im2"]}}
      ],
      "sourceTransforms": [
        {"affine": {"parameters": [...], "sources": ["im1"]}},
        {"affine": {"parameters": [...], "sources": ["im2"]}},
        {"grid": {"sources": ["im1", "im2"]}}
      ]
    }
  }
}
```

(The first two affines are the registrations for the first and the second source, respectively.) And more views could be defined in the same way, just using different parameters for the affines. As far as I can see that covers everything you describe above, without introducing the "scene" as an additional argument.
Sounds good. There are a few questions about this:
For those, would it make sense to also have a gallery of viewers? This scenario would illustrate it:
This is not fully defined. I think it's a good convention to do "pixelSpace"->"physicalSpace" in the xml (i.e. only a scale transform) and then do "physical"->"physical" (i.e. an affine that rotates and translates) in the source transforms.
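As a minimal sketch of that convention (all values hypothetical): the xml contributes only a voxel-size scaling from pixel to physical space, and the source transform contributes a rotation plus translation within physical space; the full chain is their product.

```python
import math

def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    """Apply a 4x4 affine to a 3D point."""
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# "pixelSpace" -> "physicalSpace": scale only (voxel size, hypothetical).
voxel_size = (0.01, 0.01, 0.025)
scale = [[voxel_size[0], 0, 0, 0],
         [0, voxel_size[1], 0, 0],
         [0, 0, voxel_size[2], 0],
         [0, 0, 0, 1]]

# "physical" -> "physical": an affine that rotates (90 deg about z)
# and translates, as would live in the source transforms.
theta = math.pi / 2
affine = [[math.cos(theta), -math.sin(theta), 0, 10.0],
          [math.sin(theta),  math.cos(theta), 0,  0.0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]

# Full chain: pixel coordinates -> physical -> registered physical.
full = matmul(affine, scale)
p = apply(full, (100, 0, 0))  # pixel (100, 0, 0)
```

Note the order: the scale is applied first, so the affine's rotation and translation are expressed in physical units, independent of the source's voxel size.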
Grid arranges the sources in a grid. This is done in the space that results after applying the preceding transforms. So yes, for most practical purposes this will be in physical space.
I think we haven't fully solved this case yet. There are two options: find an optimal packing for the different shapes or take the maximal shape of the individual sources.
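The second option ("take the maximal shape") can be sketched as follows; the function name and values are my own illustration, not part of the spec. Each source gets only a translation into its grid cell, so its own scaling (and hence voxel size) stays untouched.

```python
def grid_translations(extents, columns):
    """Place sources of different physical extents on a regular grid.

    extents: list of (width, height) per source, in physical units.
    The cell size is the maximal extent over all sources; returns one
    (tx, ty) translation per source.
    """
    cell_w = max(w for w, h in extents)
    cell_h = max(h for w, h in extents)
    translations = []
    for i in range(len(extents)):
        row, col = divmod(i, columns)
        translations.append((col * cell_w, row * cell_h))
    return translations

# Three sources of different sizes, laid out in two columns.
offsets = grid_translations([(10, 10), (30, 20), (5, 5)], columns=2)
# -> [(0, 0), (30, 0), (0, 20)]
```

Optimal packing of heterogeneous shapes would be more space-efficient but considerably more complex; the maximal-shape variant keeps the layout predictable.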
I don't think we want to support multiple viewers for now, because this would complicate the Java code extremely. But this is ultimately up to @tischi, who is the only one who can really judge this. Note that what you describe is also possible in a single viewer: we allow duplication of sources in the grid view already. And as far as I remember from previous discussions, it should then also be possible to just navigate within a single source using BDV-PG code.
This could be one way to go.
I don't really know what you mean by mask here. Do you mean showing only a cutout of the full dataset? In any case, I don't think that's supported yet.
Yes, that would be desired. The big advantage of the gallery is that multiple target objects can be viewed next to each other. I can see two ways of implementing this:
Not really. In our current spec bookmarks are
Yes, as we have discussed, that's already supported by the
Ok, I understand now. And indeed this should be handled somehow.
Regarding the first option (again as mentioned above): @tischi would need to comment on whether that's feasible in our current viewer model or not. From the spec perspective it would be pretty simple to do; I would probably introduce a new type of transformation
Regarding the second option: ok, I would not call this mask (because we already use this to refer to pixel masks elsewhere), rather
Where exactly can I find the bookmark specs? Are these included in the grid-view tables? I could not find bookmark JSON files anywhere. |
Is it all in the respective
Yes, it's in |
#44 allows specifying the crop as a source transform now. @tischi still needs to implement this in the viewer. |
Let me know once there is something and I will then try to implement it. I already checked with @NicoKiaru and there should be code in the playground that could be used/adapted for this. |
I put one test project in |
@tischi @martinschorb |
I am taking Friday off, but I think I will get to it early next week! |
ping @NicoKiaru
The aim of this proposal is to provide means of defining pairwise registrations of sources, in case of grouping sets of sources into a common viewing scenario (multi-modal experiment). These would be applied based on a ruleset defined by the viewer/user (different registrations can be competitive/contradicting).

I would keep the transformation specs consistent with what is agreed on/discussed here: ome/ngff#28

Also, each source is considered to provide a `base_transform` that places it into physical space.

The registration spec should be stored with each data source. It would link it to a `reg_target`. Ideally a registration algorithm would populate both the metadata of the "moving" source and the "fixed" source. `reg_target` would be defined as a relative path pointing to the target location (BDV-XML, OME-Zarr, path inside h5/n5/zarr/..., ...).

Each registration spec should consist of (labels open for discussion):

- `reg_nascency`: The basic underlying procedure (`image-based` or `feature-based`, `landmark-based`, `acquisition-based` or `hardware-based`) generating the registration.
- `reg_originator` (optional): The software used to generate the registration (`ec-CLEM`, `BigWarp`, `elastix`, `Amira`, ...).
- `reg_coordinatebase`: The coordinate base that the registration uses (`voxel`, `physical`). This can be different for each originator software. I would prefer to store the resulting registration in the native coordinate frame it was generated in. This would later enable more flexibility for the viewer.
- `reg_tf_type`: The type of transformation (`scale`, `affinetransform3d`, `eulerElastix`, `Wrapped2DTransformAs3D`, ...). This should match the Transformation Specification (ome/ngff#28). I would keep as many conventions as possible consistent with BigWarp/BDV-PG etc.

An example:
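As a sketch, a single registration entry using the fields above could look like the following (the target path, the `parameters` field, and all values are my illustrative assumptions, not part of the proposal):

```json
{
  "reg_target": "../lm/lowmag_overview.xml",
  "reg_nascency": "landmark-based",
  "reg_originator": "BigWarp",
  "reg_coordinatebase": "physical",
  "reg_tf_type": "affinetransform3d",
  "parameters": [
    1.0, 0.0, 0.0, 12.5,
    0.0, 1.0, 0.0, -3.2,
    0.0, 0.0, 1.0, 0.0
  ]
}
```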