Increase maximum size of json file that will be read for annotations.
This is still more constrained than desirable and could use
refactoring. This is not the whole solution for #1073.
manthey committed Mar 2, 2023
1 parent 1c772f1 commit 6434731
Showing 2 changed files with 12 additions and 2 deletions.
CHANGELOG.md (2 additions, 1 deletion)
@@ -5,10 +5,11 @@
 ### Improvements
 - Allow ICC correction to specify intent ([#1066](../../pull/1066))
 - Make tile sources pickleable ([#1071](../../pull/1071))
-- Extract scale information from more bioformats files ([#1073](../../pull/1073))
+- Extract scale information from more bioformats files ([#1074](../../pull/1074))

 ### Bug Fixes
 - The cache could reuse a class inappropriately ([#1070](../../pull/1070))
+- Increase size of annotation json that will be parsed ([#1075](../../pull/1075))

 ## 1.20.1
girder_annotation/girder_large_image_annotation/handlers.py (10 additions, 1 deletion)
@@ -124,7 +124,16 @@ def process_annotations(event):  # noqa: C901
         logger.error('Could not load models from the database')
         return
     try:
-        data = orjson.loads(File().open(file).read().decode())
+        if file['size'] > 1 * 1024 ** 3:
+            raise Exception('File is larger than will be read into memory.')
+        data = []
+        with File().open(file) as fptr:
+            while True:
+                chunk = fptr.read(1024 ** 2)
+                if not len(chunk):
+                    break
+                data.append(chunk)
+        data = orjson.loads(b''.join(data).decode())
     except Exception:
         logger.error('Could not parse annotation file')
         raise
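The pattern in the diff above (check the declared size against a hard cap, then read the stream in fixed-size chunks and join before parsing) can be sketched outside of Girder roughly as follows. This is a minimal sketch, not the commit's code: `load_json_capped` is a hypothetical name, plain local files stand in for Girder's `File().open()`, and the stdlib `json` module stands in for `orjson`; only the 1 GiB cap and 1 MiB chunk size mirror the commit.

```python
import json  # the commit uses orjson; json keeps this sketch stdlib-only
import os

MAX_SIZE = 1 * 1024 ** 3   # refuse anything over 1 GiB, as in the commit
CHUNK_SIZE = 1024 ** 2     # read 1 MiB at a time

def load_json_capped(path):
    """Parse a JSON file, refusing files larger than MAX_SIZE."""
    # Reject oversized files before reading anything into memory.
    if os.path.getsize(path) > MAX_SIZE:
        raise Exception('File is larger than will be read into memory.')
    chunks = []
    with open(path, 'rb') as fptr:
        while True:
            chunk = fptr.read(CHUNK_SIZE)
            if not chunk:  # b'' signals end of file
                break
            chunks.append(chunk)
    return json.loads(b''.join(chunks).decode())
```

One design note: `orjson.loads` accepts `bytes` input directly, so the trailing `.decode()` in the commit is not strictly required for parsing; decoding does, however, surface encoding errors before the JSON parser runs.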
