OML 3.1.0
The update focuses on several components:
- We added "official" texts support and the corresponding Python examples. (Note: texts support in Pipelines is not available yet.) A text-modality sketch is shown after this list.
- We introduced the `RetrievalResults` (`RR`) class, a container that stores the gallery items retrieved for given queries. `RR` provides a unified way to visualize predictions and compute metrics (if ground truths are known). It also simplifies post-processing: a post-processor takes an `RR` object as input and produces an updated `RR_upd` as output. Having these two objects allows comparing retrieval results visually or by metrics, and you can easily build a chain of such post-processors. `RR` is memory-optimized thanks to batching: it doesn't store the full matrix of query-gallery distances. (This doesn't make the search approximate, though.) An end-to-end image example is shown after this list.
- We made `Model` and `Dataset` the only classes responsible for modality-specific logic. `Model` is responsible for interpreting its input dimensions: for example, `BxCxHxW` for images or `BxLxD` for sequences like texts. `Dataset` is responsible for preparing an item: it may use `Transforms` for images or a `Tokenizer` for texts. Functions computing metrics like `calc_retrieval_metrics_rr`, as well as `RetrievalResults`, `PairwiseReranker`, and other classes and functions, are unified to work with any modality.
- We added `IVisualizableDataset`, which has the method `.visualize()` showing a single item. If implemented, `RetrievalResults` is able to show the layout of retrieved results.
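For reference, here is a minimal text-modality sketch based on the Python examples; the exact names (`HFWrapper`, `TextQueryGalleryLabeledDataset`, `get_mock_texts_dataset`) and signatures may differ slightly in your version, so treat it as a sketch and check the docs:

```python
from transformers import AutoModel, AutoTokenizer

from oml.datasets import TextQueryGalleryLabeledDataset
from oml.inference import inference
from oml.metrics import calc_retrieval_metrics_rr
from oml.models import HFWrapper
from oml.retrieval import RetrievalResults
from oml.utils import get_mock_texts_dataset

# Wrap a HuggingFace encoder so that it follows OML's extractor interface
model = HFWrapper(AutoModel.from_pretrained("bert-base-uncased"), feat_dim=768).eval()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# The dataset prepares items (tokenization); the model interprets BxLxD inputs
_, df_test = get_mock_texts_dataset()
dataset = TextQueryGalleryLabeledDataset(df_test, tokenizer=tokenizer)

embeddings = inference(model, dataset, batch_size=4, num_workers=0)
rr = RetrievalResults.from_embeddings(embeddings, dataset, n_items=5)

print(calc_retrieval_metrics_rr(rr, cmc_top_k=(1,), map_top_k=(5,)))
```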
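And an image-modality sketch of the `RetrievalResults` workflow: building `RR`, turning it into an updated `RR_upd` with a post-processor, comparing the two by metrics, and visualizing results. Import paths and signatures (e.g. of `PairwiseReranker` and `ConcatSiamese`) follow the examples but may differ slightly from your version:

```python
from oml.datasets import ImageQueryGalleryLabeledDataset
from oml.inference import inference
from oml.metrics import calc_retrieval_metrics_rr
from oml.models import ConcatSiamese, ViTExtractor
from oml.registry import get_transforms_for_pretrained
from oml.retrieval import RetrievalResults
from oml.retrieval.postprocessors.pairwise import PairwiseReranker
from oml.utils import get_mock_images_dataset

model = ViTExtractor.from_pretrained("vits16_dino").eval()
transforms, _ = get_transforms_for_pretrained("vits16_dino")

_, df_test = get_mock_images_dataset(global_paths=True)
dataset = ImageQueryGalleryLabeledDataset(df_test, transform=transforms)

embeddings = inference(model, dataset, batch_size=4, num_workers=0)

# RR is built in a batched manner and keeps only the top n_items per query,
# not the full query-gallery distance matrix
rr = RetrievalResults.from_embeddings(embeddings, dataset, n_items=5)

# A post-processor takes RR and returns an updated RR; such steps can be chained
siamese = ConcatSiamese(extractor=model, mlp_hidden_dims=[100])  # randomly initialized here
reranker = PairwiseReranker(top_n=3, pairwise_model=siamese, batch_size=4, num_workers=0)
rr_upd = reranker.process(rr, dataset=dataset)

# Compare the two results by metrics (and visually, if desired)
print(calc_retrieval_metrics_rr(rr, cmc_top_k=(1,), map_top_k=(5,)))
print(calc_retrieval_metrics_rr(rr_upd, cmc_top_k=(1,), map_top_k=(5,)))

# Works because ImageQueryGalleryLabeledDataset implements IVisualizableDataset
rr.visualize(query_ids=[0, 1], dataset=dataset, show=True)
```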
Migration from OML 2.* [Python API]:
The easiest way to catch up with changes is to re-read the examples!
- The recommended way of validation is to use `RetrievalResults` and functions like `calc_retrieval_metrics_rr`, `calc_fnmr_at_fmr_rr`, and others. The `EmbeddingMetrics` class is kept for use with PyTorch Lightning and inside Pipelines. Note that the signatures of `EmbeddingMetrics` methods have slightly changed; see the Lightning examples for details. A minimal validation sketch is given after this list.
- Since modality-specific logic is confined to `Dataset`, it doesn't output `PATHS_KEY`, `X1_KEY`, `X2_KEY`, `Y1_KEY`, and `Y2_KEY` anymore. Keys which are not modality-specific, such as `LABELS_KEY`, `IS_GALLERY_KEY`, `IS_QUERY_KEY`, and `CATEGORIES_KEY`, are still in use.
- `inference_on_images` is now `inference` and works with any modality.
- Slightly changed the interfaces of `Datasets`. For example, we now have the `IQueryGalleryDataset` and `IQueryGalleryLabeledDataset` interfaces: the first should be used for inference, the second one for validation. We also added the `IVisualizableDataset` interface. A toy dataset sketch is given after this list.
- Removed some internals like `IMetricDDP`, `EmbeddingMetricsDDP`, `calc_distance_matrix`, `calc_gt_mask`, `calc_mask_to_ignore`, and `apply_mask_to_ignore`. These changes shouldn't affect you. We also removed the code related to the pipeline with precomputed triplets.
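As a sketch of the first point, a plain-Python validation now looks roughly like this (the import location of `calc_fnmr_at_fmr_rr` is our assumption; verify it in the docs):

```python
from oml.datasets import ImageQueryGalleryLabeledDataset
from oml.inference import inference  # replaces inference_on_images
from oml.metrics import calc_fnmr_at_fmr_rr, calc_retrieval_metrics_rr
from oml.models import ViTExtractor
from oml.registry import get_transforms_for_pretrained
from oml.retrieval import RetrievalResults
from oml.utils import get_mock_images_dataset

model = ViTExtractor.from_pretrained("vits16_dino").eval()
transforms, _ = get_transforms_for_pretrained("vits16_dino")

_, df_test = get_mock_images_dataset(global_paths=True)
dataset = ImageQueryGalleryLabeledDataset(df_test, transform=transforms)

embeddings = inference(model, dataset, batch_size=4, num_workers=0)
rr = RetrievalResults.from_embeddings(embeddings, dataset, n_items=5)

print(calc_retrieval_metrics_rr(rr, cmc_top_k=(1, 5), map_top_k=(5,)))
print(calc_fnmr_at_fmr_rr(rr, fmr_vals=(0.1,)))
```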
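And a toy sketch of a custom dataset implementing `IQueryGalleryLabeledDataset` and `IVisualizableDataset`. The shown method set (`get_query_ids`, `get_gallery_ids`, `get_labels`, `get_label2category`, `visualize`) reflects our reading of the interfaces; check `oml.interfaces.datasets` for the authoritative contracts:

```python
from typing import Any, Dict

import numpy as np
import torch
from torch import LongTensor

from oml.interfaces.datasets import IQueryGalleryLabeledDataset, IVisualizableDataset


class RandomVectorsDataset(IQueryGalleryLabeledDataset, IVisualizableDataset):
    """Toy dataset over random vectors: the first half are queries, the second half galleries."""

    def __init__(self, n_items: int = 10, dim: int = 8):
        self.input_tensors_key = "input_tensors"
        self.labels_key = "labels"
        self.index_key = "idx"
        self.extra_data: Dict[str, Any] = {}

        self._features = torch.randn(n_items, dim)
        self._labels = np.arange(n_items) % 3

    def __len__(self) -> int:
        return len(self._features)

    def __getitem__(self, idx: int) -> Dict[str, Any]:
        # Only generic keys here: no PATHS_KEY, X1_KEY, etc.
        return {
            self.input_tensors_key: self._features[idx],
            self.labels_key: int(self._labels[idx]),
            self.index_key: idx,
        }

    def get_query_ids(self) -> LongTensor:
        return torch.arange(0, len(self) // 2).long()

    def get_gallery_ids(self) -> LongTensor:
        return torch.arange(len(self) // 2, len(self)).long()

    def get_labels(self) -> np.ndarray:
        return self._labels

    def get_label2category(self) -> Dict[int, str]:
        return {0: "a", 1: "b", 2: "c"}

    def visualize(self, item: int, color=(0, 0, 0)) -> np.ndarray:
        # Any HxWx3 image representing the item is fine; here just a flat patch
        return np.full((32, 32, 3), fill_value=(item * 20) % 255, dtype=np.uint8)


dataset = RandomVectorsDataset()
print(dataset[0].keys(), dataset.get_query_ids(), dataset.get_labels())
```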
Migration from OML 2.* [Pipelines]:
- Feature extraction: no changes, except for adding an optional argument, `mode_for_checkpointing = (min | max)`. It may be useful to switch between "the lower, the better" and "the greater, the better" types of metrics.
- Pairwise-postprocessing pipeline: slightly changed the name and arguments of the `postprocessor` sub-config; `pairwise_images` is now `pairwise_reranker` and doesn't need transforms. A sketch of the updated fragment is shown below.