Scalability fixes - Load model once per user (#944)
* Scalability fixes - Load model once per user

  Implemented code changes to improve scalability, as described in issue #950 in e-mission-docs. The initial approach uses the Singleton design pattern on instance variables: before attempting to load the model, check whether it has already been loaded. This prevents the model from being reloaded if it is already present in the instance variable of the current BuiltinModelStorage instance.

* Missing imports and self keyword

  Added an import for logging. Added the self parameter to pass a reference to the current instance object.

* Refactoring label inference pipeline for scalability issues

  Changes after refactoring the label inference pipeline to load the model only once. Will be merged with the previous PR branch.

* Removed singleton approach

  Started from scratch to implement the refactored label pipeline approach.

* Changed results_list to result_dict

  Use a dictionary of prediction lists, keyed by trip, instead of a flat list. This keeps each trip's predictions separate while keeping all of a trip's algorithm-wise predictions together in one list (see the first sketch below).

* Fixed function call to predict_labels_with_n in TestRunGreedyModel

  Updated the call to predict_labels_with_n() to include trip_list and user_id, as per the latest changes. Also fixed a typo in eacilp for trip.get_id().

* Fixed function signature in TestRunGreedyModel

  eamur.predict_labels_with_n() now takes a trip_list instead of a single trip. The previous commit already passed a trip_list but forgot to update the parameter name.

* Updated TestRunGreedyModel + debug cleanup

  Receive a prediction list of (prediction, n) pairs for each trip in trip_list, then run the assertion tests on each prediction in that list. Also cleaned up some debug print statements from earlier commits/PRs.

* Cleanup print statements

  Cleaned up previously used debug statements.

* Added debug statements to track the time taken for operations.

* Cleaned up some comments and debug statements.

* Fixed failing analysis pipeline tests involved in label inference

  A couple of pipeline tests (TestLabelInferencePipeline, TestExpectationPipeline) were failing because they called the placeholder_predict_n functions in eacili inferrers. Updated those functions to receive a trip_list instead of a single trip.

* Removed user_id parameter

  The user_id is now fetched from the trip_list itself. Added an assertion to ensure that exactly one unique user_id is present across all trips; the assertion fails if multiple user_ids are found in the trip_list (see the first sketch below).

* Removed user_id parameter from tests

  Removed it from the test files and from inferrers.py, which contains the placeholder_predictor functions.

* Moved model loading step out of predict_labels_with_n

  Refactored the code to pass the trip model directly into predict_labels_with_n() in eamur, and moved the model loading step to eacil.inferrers using eamur's load_model(). Modified TestRunGreedyModel to use the refactored function (see the second sketch below).

* Passed user_id in TestRunGreedyModel.py

  The test was failing due to a missing user_id parameter; passed user_id to fix it.

---------

Co-authored-by: Mahadik, Mukul Chandrakant <[email protected]>
1 parent 81c4314 · commit 1a1b1e6 · 8 changed files with 210 additions and 149 deletions.
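
The bullets above describe the final shape of the refactored prediction API: one call per trip_list with a pre-loaded model, a dictionary of per-trip prediction lists, and an assertion that all trips belong to a single user. Below is a minimal sketch of that shape under simplifying assumptions, namely dict-style trips with "user_id" and "_id" keys and an illustrative model.predict(trip) interface; the real eamur.predict_labels_with_n in the repository differs in detail.

```python
import logging
from typing import Any, Dict, List, Tuple

def predict_labels_with_n(trip_list: List[Dict[str, Any]],
                          model: Any) -> Dict[Any, List[Tuple[Dict[str, Any], int]]]:
    """Return a dict mapping each trip's id to its list of (prediction, n) pairs.

    Sketch only: the model is passed in already loaded, so storage is read
    once per user rather than once per trip.
    """
    if len(trip_list) == 0:
        return {}

    # All trips in a single call must belong to the same user; fail loudly otherwise.
    user_ids = {trip["user_id"] for trip in trip_list}
    assert len(user_ids) == 1, f"multiple user_ids found for trip_list: {user_ids}"

    predictions_dict = {}
    for trip in trip_list:
        # model.predict(trip) is an assumed interface standing in for the
        # per-trip inference that the loaded trip model performs.
        prediction, n = model.predict(trip)
        predictions_dict[trip["_id"]] = [(prediction, n)]
        logging.debug("trip %s -> prediction with n = %s", trip["_id"], n)
    return predictions_dict
```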
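
The caller side described in the "Moved model loading step out of predict_labels_with_n" bullet might then look roughly like this. The helper name infer_labels_for_user is hypothetical, and load_model is injected as a parameter purely as a stand-in for eamur.load_model (called from eacil.inferrers in the commit) so that the sketch stays self-contained.

```python
import logging
from typing import Any, Callable, Dict, List, Optional

def infer_labels_for_user(trip_list: List[Dict[str, Any]],
                          load_model: Callable[[Any], Optional[Any]]) -> Dict[Any, List[Any]]:
    """Hypothetical inferrer-side helper: load the trip model once for the
    user, then predict labels for the whole trip_list in a single call.
    """
    if len(trip_list) == 0:
        return {}

    # user_id is fetched from the trips themselves rather than passed in.
    user_id = trip_list[0]["user_id"]

    # The expensive storage read now happens exactly once per user,
    # which is the scalability fix this commit implements.
    model = load_model(user_id)
    if model is None:
        logging.debug("no trained model for user %s; returning empty predictions", user_id)
        return {trip["_id"]: [] for trip in trip_list}

    # predict_labels_with_n is the function sketched above.
    return predict_labels_with_n(trip_list, model)
```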