DTS manifest schema #209
Conversation
staging_service/app.py
Outdated
Path._DATA_DIR = DATA_DIR
Path._META_DIR = META_DIR
Path._CONCIERGE_PATH = CONCIERGE_PATH
_DTS_MANIFEST_SCHEMA_PATH = DTS_MANIFEST_SCHEMA_PATH
This line seems pointless
I don't like how git manages staged files pre-commit, like the one that had this fix. I get how it does it, but I don't like it.
Fixed.
tests/test_app.py
Outdated
not_real_file = Path("not_real")
while not_real_file.exists():
    not_real_file = Path(str(uuid.uuid4()))
I don't understand why this doesn't just start with the UUID file?
I figured "not_real" was a unique enough file name; if it isn't, the loop falls back to UUIDs. I can change it anyway.
Changed.
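For reference, a minimal sketch of the simpler approach discussed above, starting directly from a UUID-based name instead of "not_real" (the variable name is illustrative, not necessarily what the test uses now):

import uuid
from pathlib import Path

# Pick a name that almost certainly doesn't exist; loop only to guard against
# a vanishingly unlikely collision with an existing file.
not_real_file = Path(str(uuid.uuid4()))
while not_real_file.exists():
    not_real_file = Path(str(uuid.uuid4()))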
wrong_schema = "not valid json"
schema_file = tmp_path / f"{uuid.uuid4()}.json"
schema_file.write_text(wrong_schema, encoding="utf-8")
with pytest.raises(json.JSONDecodeError, match="Expecting value: line 1 column 1"):
This seems like the wrong error, naively - there's a value there, it's just the wrong value
That's the error provided by the json decoder. It's expecting a JSON value, not a bare string.
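For anyone curious, this is easy to reproduce with the standard library json module; the bare string fails at the very first character because it isn't the start of any JSON value:

import json

try:
    json.loads("not valid json")
except json.JSONDecodeError as err:
    # Prints something like: Expecting value: line 1 column 1 (char 0)
    print(err)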
invalid = {"properties": {"some_prop": {"type": "not_real"}}}
schema_file = tmp_path / f"{uuid.uuid4()}.json"
schema_file.write_text(json.dumps(invalid), encoding="utf-8")
exp_err = f"Schema file {schema_file} is not a valid JSON schema: 'not_real' is not valid"
There's no way to get it to spit out the path to the incorrect part of the structure? For a newbie this'll be a pain to debug.
In that particular case, the absolute path is [] and the "schema path" is ['allOf', 0, 'type'], which is also not very helpful, except that it says "type" in it.
IMO, a newbie shouldn't be making new schemas without testing them elsewhere first, then throwing them at the service and expecting it to load them on startup and be super friendly.
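For illustration, here's roughly how those paths can be pulled off the error when checking a schema with the jsonschema package; the exact message and paths depend on the jsonschema version and metaschema, so treat this as a sketch rather than the service's code:

import jsonschema

bad_schema = {"properties": {"some_prop": {"type": "not_real"}}}
try:
    jsonschema.Draft202012Validator.check_schema(bad_schema)
except jsonschema.exceptions.SchemaError as err:
    print(err.message)              # e.g. "'not_real' is not valid under any of the given schemas"
    print(list(err.absolute_path))  # path into the offending schema document
    print(list(err.schema_path))    # path into the metaschema keyword that failed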
Quality Gate passed
LGTM
(changes look big - they're mostly Pipfile.lock)
Adds the DTS manifest JSON schema, does some basic validation with it and returns errors.
Here's the short version of what this PR does:
import_specifications
Actual parsing via the schema comes in the next PR. This is mostly about setup and validation.
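As a rough illustration of the setup-and-validation flow described above (the function names, draft version, and structure here are assumptions for the sketch, not the service's actual API), the general pattern with jsonschema looks like this:

import json
from pathlib import Path

import jsonschema

def load_schema(schema_path: Path) -> dict:
    # Load the manifest JSON schema at startup and fail fast if it is not
    # valid JSON or not a valid JSON schema.
    schema = json.loads(schema_path.read_text(encoding="utf-8"))
    jsonschema.Draft202012Validator.check_schema(schema)
    return schema

def validate_manifest(manifest: dict, schema: dict) -> list[str]:
    # Collect every validation error rather than stopping at the first one.
    validator = jsonschema.Draft202012Validator(schema)
    return [err.message for err in validator.iter_errors(manifest)]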