Add logger examples in eval script #59
base: master
@@ -62,6 +62,22 @@ If you are looking for a simple challenge configuration that you can replicate t
11. To update the challenge on EvalAI, make changes in the repository, push them to the `challenge` branch, and wait for the build to complete.
### Printing and Logging in Evaluation Script
`print` statements will show up on the console directly.
To get `logging` statements from the evaluation script as well, ensure that the logger has a `stdout` handler attached; we redirect output from `stdout` to the submission worker's console.
An example logger can be created like so:
```python
import logging
import sys

eval_script_logger = logging.getLogger(name='eval_script')
eval_script_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
eval_script_logger.addHandler(handler)
```
Then, we can use this logger anywhere in the script, and logs at the corresponding levels will show up in the output.
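For example, the logger can be called at any level once the handler is attached (a usage sketch; the setup lines repeat the snippet above, and the message strings are illustrative, not part of the starter code):

```python
import logging
import sys

# Logger configured as in the snippet above
eval_script_logger = logging.getLogger(name='eval_script')
eval_script_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
eval_script_logger.addHandler(handler)

# Messages at DEBUG level and above reach stdout, and therefore
# the submission worker's console
eval_script_logger.debug("Parsed annotation file")
eval_script_logger.info("Evaluating for Dev Phase")
eval_script_logger.warning("Found no predictions for some images")
```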
## Create challenge using config
1. Fork this repository.
```diff
@@ -1,8 +1,20 @@
 import random
+import logging
+import time
+import sys

 def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
     print("Starting Evaluation.....")
+
+    eval_script_logger = logging.getLogger(name='eval_script')
+    eval_script_logger.setLevel(logging.DEBUG)
+
+    handler = logging.StreamHandler(sys.stdout)
+    handler.setLevel(logging.DEBUG)
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    handler.setFormatter(formatter)
+    eval_script_logger.addHandler(handler)
     """
     Evaluates the submission for a particular challenge phase and returns score
     Arguments:
```

> **Review comment** (on lines +9 to +17): can you create a new method called …
```diff
@@ -54,7 +66,7 @@ def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwarg
         ]
         # To display the results in the result file
         output["submission_result"] = output["result"][0]["train_split"]
-        print("Completed evaluation for Dev Phase")
+        eval_script_logger.info("Completed evaluation for Dev Phase")
     elif phase_codename == "test":
         print("Evaluating for Test Phase")
         output["result"] = [
```
```diff
@@ -77,5 +89,5 @@ def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwarg
         ]
         # To display the results in the result file
         output["submission_result"] = output["result"][0]
-        print("Completed evaluation for Test Phase")
+        eval_script_logger.info("Completed evaluation for Test Phase")
     return output
```
> **Review comment:** Change to "Logging in Evaluation Script"