This repository has been archived by the owner on Feb 11, 2019. It is now read-only.
At the moment, Geordi responds with a 500 error to posts whose JSON string in the `data` field is longer than 512 characters.
Loopback, the framework Geordi is built on, defaults to `VARCHAR(512)` for text fields.
While it would theoretically be possible to use a larger data type such as `MEDIUMTEXT`, this would massively increase the space taken up by the Geordi DB, and most records do not need a longer `data` field.
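For reference, the rejected alternative would only need a per-property override in the model definition; LoopBack's MySQL connector lets a property opt into a wider column type. A rough sketch (the model and property names here are illustrative, not Geordi's actual schema):

```json
{
  "name": "Event",
  "properties": {
    "data": {
      "type": "string",
      "mysql": { "dataType": "mediumtext" }
    }
  }
}
```

This would widen the column for every row, which is exactly the space cost the proposal below avoids.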
However, for those times when we do have more than 512 chars of data, I suggest the following.
The single event would be split over several rows until all the data has been logged.
We should add a `part_no` column: 0 for records that aren't split, and for those that are, an integer starting at 1 denoting which part of the event the row holds.
`part_no` becomes part of the key, so that the PK and part number together uniquely identify a part.
All other fields would be identical.
Then we split the `data` field into 512-character chunks. These would not necessarily be valid JSON on their own, but would be once re-concatenated.
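The split/re-concatenate scheme described above could look roughly like this (a minimal sketch; the helper names `splitData` and `joinData` are hypothetical, not part of Geordi):

```javascript
// Maximum chunk size, matching the VARCHAR(512) column.
const MAX_LEN = 512;

// Split an event's data string into rows. Unsplit records get part_no 0;
// split records are numbered from 1 so (PK, part_no) identifies each part.
function splitData(data, maxLen = MAX_LEN) {
  if (data.length <= maxLen) return [{ part_no: 0, data }];
  const parts = [];
  for (let i = 0; i < data.length; i += maxLen) {
    parts.push({ part_no: parts.length + 1, data: data.slice(i, i + maxLen) });
  }
  return parts;
}

// Re-concatenate the chunks in part order to recover the original JSON.
function joinData(rows) {
  return rows
    .slice()
    .sort((a, b) => a.part_no - b.part_no)
    .map((r) => r.data)
    .join('');
}
```

Individual chunks may end mid-token and won't parse as JSON on their own, so consumers must join all parts for a PK before calling `JSON.parse`.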
After implementing this feature, we can update PFE (Panoptes-Front-End) to take advantage of it; it currently works around the limit with a mixture of compression and fail-if-too-long.
(See https://github.com/zooniverse/Panoptes-Front-End/pull/2981/files#diff-02b1e88393f220684dcf30a2c2773356R80 and beyond)