Listing every format that could be represented as virtual zarr #218
Comments
Unfortunately, based on https://gdal.org/user/virtual_file_systems.html#jpeg2000, JPEG2000 is likely in the 'probably can't support' category. I would've liked these datasets to be virtualizable, but they're all JPEG2000 to optimize for the download-to-disk model :( Another way to phrase this question, which may help the search, is: which of the formats supported by GDAL's raster drivers can be virtualized?
I like this issue! It's worth saying that anything kerchunk can chunk can be v-zarred, right? In that repo there are suggestions of other worthwhile formats; DICOM and NIfTI (medical imaging) spring to mind. The latter is nice but often whole-file-gzipped; the former is evil in the way that other 90s standards are evil, but extremely widespread.
❤️
Yes, that's the idea. This function does
Hugging Face safetensors is an interesting example - it's uncompressed, so reading it is basically like reading netCDF3, with no internal chunking. But it also puts all the metadata at the start of the file, making it a bit like cloud-optimized HDF5. See also huggingface/safetensors#527 (comment)
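To make the "metadata at the start of the file" point concrete, here is a minimal sketch of parsing a safetensors header: the file starts with an 8-byte little-endian length prefix, then a JSON dict mapping tensor names to dtype/shape/`data_offsets` (byte offsets relative to the end of the header). The helper name `read_safetensors_header` and the `absolute_offsets` field are my own, for illustration - they are not part of any library API.

```python
import io
import json
import struct

def read_safetensors_header(f):
    """Parse a safetensors header and convert each tensor's data_offsets
    (relative to the end of the header) into absolute file offsets."""
    (n,) = struct.unpack("<Q", f.read(8))      # 8-byte LE header length
    header = json.loads(f.read(n))
    data_start = 8 + n
    return {
        name: {**info, "absolute_offsets": [o + data_start for o in info["data_offsets"]]}
        for name, info in header.items()
        if name != "__metadata__"              # optional metadata entry, no tensor data
    }

# Build a tiny in-memory safetensors file: one float32 tensor [1.0, 2.0]
payload = b"\x00\x00\x80?\x00\x00\x00@"
meta = json.dumps({"x": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}).encode()
buf = io.BytesIO(struct.pack("<Q", len(meta)) + meta + payload)
info = read_safetensors_header(buf)
```

Since every tensor's byte range is recoverable from one small read at the start of the file, each tensor maps naturally onto a single chunk-manifest entry.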
If the format is simple and common, I say it should be included immediately, especially when there is a straightforward way to check correctness.
but you can assign internal chunking. Is partial reading available upstream at all yet?
I raised #367 to track adding it.
This issue seems to suggest it is: zarr-developers/zarr-python#1106. But I think to take advantage of this with virtualizarr would require #199 to be merged.
No, zarr's PR #1106 only implemented it for Blosc compression - something I've been arguing about for a very, very long time! If you can dynamically re-imagine the chunking at runtime (which is what I think #119 does), then that would be good enough for most practical uses - but still annoying. Zarr should just do this! i.e., the chunk IO function shouldn't just be passed "I need chunk X", but "I need section (:, s:t, i:j) of chunk X", along with a way to characterise what the decompression pipeline looks like (this is OK for uncompressed data, some blosc, maybe zstd..., but not zlib). This was my suggestion for passing Contexts around in zarr v2.
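For uncompressed chunks the idea above is trivial, which is why it's frustrating that it isn't general: a slice along the leading axis of a C-order chunk corresponds to one contiguous byte range, so "I need section (s:t, :, :) of chunk X" becomes a single sub-range request. A small sketch (the helper name is hypothetical, not from zarr):

```python
import math

def leading_axis_byte_range(chunk_shape, itemsize, start, stop):
    """For an uncompressed C-order chunk, return the (offset, length) byte
    range covering rows start:stop along the first axis. Illustration only."""
    row_bytes = math.prod(chunk_shape[1:]) * itemsize  # bytes per leading-axis slab
    return start * row_bytes, (stop - start) * row_bytes

# rows 2:5 of a (10, 20, 30) float64 chunk
offset, length = leading_axis_byte_range((10, 20, 30), 8, 2, 5)
```

With a compressor like zlib this mapping breaks down, because decoding any byte requires decompressing the whole stream up to it; blosc's internal block structure is what makes partial decompression possible there.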
I don't disagree, but if we want to discuss this further we should do it on a new issue (on this repo or upstream on zarr). |
Let's list all the file formats that could potentially be represented efficiently as "virtual zarr" - i.e. zarr + chunk manifests.
The important criterion here is that the format must store data in a small number of contiguous chunks, such that access using http range requests to object storage is efficient. This rules out some formats - for example, I don't think we can efficiently access this format that @kmuehlbauer mentioned over in openradar/xradar#187 (comment):
If we start thinking of Zarr as a "SuperFormat" (super as in superset, not as in super-duper), then this is the list of existing formats comprising that set of what can be referenced using chunk manifests (see zarr-developers/zarr-specs#287).
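As a rough illustration of the "zarr + chunk manifests" idea, each manifest entry just records where a chunk's bytes live, which maps directly onto an HTTP range request. The class and field names below are illustrative only (they follow the manifest concept, not any finalized spec or the actual VirtualiZarr API):

```python
from dataclasses import dataclass

@dataclass
class ChunkEntry:
    """One chunk manifest entry: the location of a chunk's bytes in some
    pre-existing file. Sketch only - not the real spec or implementation."""
    path: str
    offset: int
    length: int

    def range_header(self) -> str:
        # HTTP Range is inclusive on both ends
        return f"bytes={self.offset}-{self.offset + self.length - 1}"

# A virtual zarr array is then ordinary zarr metadata plus a mapping
# from chunk key to entry, e.g. (offsets/lengths here are made up):
manifest = {
    "0.0": ChunkEntry("s3://bucket/data.nc", offset=20316, length=4800),
    "0.1": ChunkEntry("s3://bucket/data.nc", offset=25116, length=4800),
}
```

A format qualifies for the list below exactly when its data can be described by a small mapping like this, with each value pointing at one contiguous byte run.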
Definitely can support:
Probably can support:
- .npz files

Maybe can support?

- .mat files (specification documented here)

Probably can't support:

(The checkboxes indicate whether or not a working implementation already exists - going through kerchunk's in-memory format as an intermediate or creating a `ManifestArray` directly.)

cc @jhamman @d-v-b