
fix(lib-classifier): record classification times between the first and last annotation #6449

Conversation

@eatyourgreens (Contributor) commented Nov 7, 2024

  • Add tests for classification started_at and finished_at timestamps, to catch #6447 (“Classification started_at is off by hours (or days) for continuous sessions in the same tab”).
  • Fix the failing test for classification start times by recording started_at when a volunteer first interacts with a task, e.g. making a choice selection for a survey, choosing an answer to a question, or drawing a mark. If they go straight to Done without updating an annotation, started_at falls back to the old value (see the sketch after this list).
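
As a rough sketch of that behaviour (the setStartedAt action appears in the diff below, but this body is illustrative, not the exact implementation):

```js
// Illustrative only: overwrite the subject-load timestamp once the
// volunteer first interacts with a task. If this never runs (straight
// to Done), started_at keeps its old value from subject load.
setStartedAt () {
  self.metadata.started_at = new Date().toISOString()
}
```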

Please request review from @zooniverse/frontend team or an individual member of that team.

Package

  • lib-classifier

Linked Issue and/or Talk Post

How to Review

classification.metadata.started_at should now record the time that you started a classification, proxied as the time that you first annotate a subject. You should be able to test this by waiting a while between the subject loading and making your first annotation.

It should fix cases, mentioned by Peter Mason on Talk, where project teams are finding classifications that seem to have taken hours or days, but really only took minutes.

> I have seen differences in the start time (started classifying by the volunteer) and finished times (completed classification received by zooniverse) of many hours, even days.

The new tests for classification start time and duration should fail when run with the old code for classification.metadata.started_at.
https://github.com/zooniverse/front-end-monorepo/actions/runs/11720164440/job/32644906423#step:9:5027

Logs from a failing test for classification start time, where the expected time is one hour different from the recorded time.
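
For illustration, a start-time test along these lines would catch the bug. It assumes sinon fake timers and chai; createClassification and addAnnotation are hypothetical stand-ins, not the exact lib-classifier API:

```js
import { expect } from 'chai'
import sinon from 'sinon'

describe('classification start time', function () {
  it('records started_at at the first annotation, not at subject load', function () {
    const clock = sinon.useFakeTimers(new Date('2024-11-07T09:00:00Z').getTime())
    const classification = createClassification(subject) // hypothetical setup helper
    // the volunteer waits an hour between subject load and first annotation
    clock.tick(60 * 60 * 1000)
    classification.addAnnotation(task, 0) // hypothetical task helper
    // the old code recorded started_at at subject load, an hour earlier,
    // so this assertion fails against it
    expect(classification.metadata.started_at).to.equal('2024-11-07T10:00:00.000Z')
    clock.restore()
  })
})
```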

I've got a staging project with a variety of workflows and tasks, including drawing tasks, which now record started_at when you first draw on the subject:
https://localhost:8080/?project=eatyourgreens/-project-testing-ground&workflow=3370

Checklist

PR Creator - Please cater the checklist to fit the review needed for your code changes.
PR Reviewer - Use the checklist during your review. Each point should be checkmarked or discussed before PR approval.

General

  • Tests are passing locally and on GitHub
  • Documentation is up to date and changelog has been updated if appropriate
  • You can yarn panic && yarn bootstrap or docker-compose up --build and FEM works as expected
  • FEM works in all major desktop browsers: Firefox, Chrome, Edge, Safari (Use Browserstack account as needed)
  • FEM works in a mobile browser

General UX

Example Staging Project: i-fancy-cats

  • All pages of a FEM project load: Home Page, Classify Page, and About Pages
  • Can submit a classification
  • Can sign-in and sign-out
  • The component is accessible

Bug Fix

  • The PR creator has listed user actions to use when testing whether the bug is fixed
  • The bug is fixed
  • Unit tests are added or updated

Comment on lines +39 to +42
```js
// Read the volunteer's locale from the root store and save it on the classification.
const userLanguage = getRoot(self)?.locale
if (userLanguage) {
  self.update({ userLanguage })
}
```
@eatyourgreens (Contributor, Author) commented Nov 7, 2024

This isn't related to the timestamps bug but fixes the store error in #6448.

@eatyourgreens marked this pull request as ready for review November 7, 2024 09:48
@coveralls commented Nov 7, 2024

Coverage Status

coverage: 77.854% (-0.01%) from 77.868% when pulling 5637ca1 on eatyourgreens:test-classification-timestamps into c146bdf on zooniverse:master.

@eatyourgreens force-pushed the test-classification-timestamps branch from 9d9cc4a to 616ece1 November 7, 2024 09:56
@eatyourgreens changed the title from “test(lib-classifier): test classification timestamps” to “fix(lib-classifier): record classification times between the first and last annotation” Nov 7, 2024
@eatyourgreens force-pushed the test-classification-timestamps branch from c46b663 to d6fbdbe November 7, 2024 11:03
@eatyourgreens marked this pull request as draft November 7, 2024 12:37
@eatyourgreens (Contributor, Author) commented Nov 7, 2024

Tests are passing but started_at is still wrong when I check this with I Fancy Cats. Setting it back to draft while I look at that.

@eatyourgreens (Contributor, Author) commented Nov 7, 2024

Figured it out: a new, empty annotation is created when the subject first loads, so the code needs to wait until annotation._inProgress is true before setting the classification start time. That happens when someone first interacts with a task, e.g. selecting an answer to a question.

EDIT: for a survey task, annotation starts when annotation._choiceInProgress is true. All other task annotations start when annotation._inProgress is true.
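
One way to wire that up in MobX-State-Tree, sketched here for illustration (the PR's actual wiring isn't shown in this excerpt):

```js
import { reaction } from 'mobx'
import { addDisposer } from 'mobx-state-tree'

function afterAttach () {
  // run _onAnnotationsChange (from the diff below) whenever the
  // annotations' in-progress state changes, so started_at is set
  // on the first interaction with a task
  addDisposer(self, reaction(
    () => self.inProgress,
    _onAnnotationsChange
  ))
}
```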

@eatyourgreens marked this pull request as ready for review November 7, 2024 13:21
```js
function _onAnnotationsChange () {
  // set started_at when inProgress changes from false to true
  if (self.inProgress) {
    self.setStartedAt()
  }
}
```
@eatyourgreens (Contributor, Author):

If you comment this out, the tests should fail.

@eatyourgreens force-pushed the test-classification-timestamps branch from 837a0bd to 9cf45c4 November 7, 2024 13:47
```js
get inProgress() {
  let inProgress = false
  self.annotations.forEach(annotation => {
    inProgress ||= annotation._inProgress || annotation._choiceInProgress
  })
  return inProgress
}
```
@eatyourgreens (Contributor, Author):

_choiceInProgress is used by the survey task. All other tasks use _inProgress.

- [x] Add tests for classification `started_at` and `finished_at` timestamps.
- [ ] fix the failing test for classification start times.
A new, empty annotation is created when the subject first loads, so wait until `annotation._inProgress` is true before setting the classification start time.
@goplayoutside3 (Contributor) commented:

The full comment about why this PR is being closed is here: #6447 (comment)

Some of that info is repeated here for clarity:
The frontend components that display the hours a volunteer spent classifying use session_time from ERAS. If the classification is stored in Panoptes, session_time is calculated from the classification’s finished_at - started_at metadata. When Ouroboros classifications were ported over to Panoptes, the backend team took steps to account for the lack of finished_at in the metadata.

session_time is where a time-limit cap is applied if the two properties were recorded several hours, or even days, apart. The time-limit cap was decided specifically with the new user stats website features in mind. We decided it was better to overestimate from a place of trust than to underestimate time spent classifying in cases where finished_at - started_at is large.
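
As a hypothetical sketch of that calculation (the real logic lives in ERAS; the cap value below is an assumption for illustration, not the production limit):

```js
// Assumed cap, for illustration only.
const SESSION_CAP_SECONDS = 30 * 60

function sessionTime (metadata) {
  const elapsedSeconds =
    (Date.parse(metadata.finished_at) - Date.parse(metadata.started_at)) / 1000
  // sessions whose timestamps are hours or days apart are capped at a
  // generous limit rather than counted in full
  return Math.min(elapsedSeconds, SESSION_CAP_SECONDS)
}
```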

Decision:
started_at will continue to be recorded on subject load. Changing started_at to the time of the first annotation misses the volunteer's time spent analyzing the subject before making an annotation, referencing the tutorial, or watching flipbook or video subjects. For some projects, annotation begins with interaction with the subject through drawing, while for others it begins only once a simple choice task is completed. Recording started_at on subject load is in line with “better to overestimate from a place of trust than underestimate”.

@eatyourgreens deleted the test-classification-timestamps branch November 21, 2024 23:06