Update greedy_similarity_binning.py #930
```diff
@@ -23,6 +23,9 @@
 INVALID_QUERY = {'metadata.key': 'invalid'}

 class BuiltinTimeSeries(esta.TimeSeries):
+
+    entryList=[]
+
```
**Comment on lines +26 to +28:**

> @humbleOldSage this is wrong. There is not only one set of entries in the database. Please read and understand the data model from chapter 5 of my thesis.

> Figured this was not necessary
```diff
     def __init__(self, user_id):
         super(BuiltinTimeSeries, self).__init__(user_id)
         self.key_query = lambda key: {"metadata.key": key}
```
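The reviewer's objection ("there is not only one set of entries in the database") maps onto a well-known Python pitfall that this minimal, self-contained sketch illustrates (only the `entryList` name is taken from the diff; the rest is hypothetical): a mutable list defined at class level is shared by every instance, so entries belonging to one user's time series would leak into every other user's.

```python
class TimeSeriesLike:
    # Class-level mutable attribute: one list shared by ALL instances.
    entryList = []

    def __init__(self, user_id):
        self.user_id = user_id

    def add_entry(self, entry):
        # Appends to the single shared list, not a per-instance one.
        TimeSeriesLike.entryList.append(entry)

ts_a = TimeSeriesLike("user_a")
ts_b = TimeSeriesLike("user_b")
ts_a.add_entry({"user": "user_a", "value": 1})

# user_b's instance sees user_a's entry: state leaks across instances.
print(ts_b.entryList)  # [{'user': 'user_a', 'value': 1}]
```

A per-instance list (`self.entryList = []` inside `__init__`) would avoid the sharing, but as the thread below concludes, the real fix was to not add the list at all.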
```diff
@@ -261,6 +264,9 @@ def get_entry_at_ts(self, key, ts_key, ts):
         logging.debug("get_entry_at_ts result = %s" % retValue)
         return retValue
+
+    def getEntryList(self):
+        return self.entryList
```
**Comment on lines +267 to +269:**

> again, this is wrong as well. you can't return the entry list because there is not just one.

> there are existing methods to get an entry list. the trip model uses existing methods. You should not have to make changes to the
```diff
     def get_data_df(self, key, time_query = None, geo_query = None,
                     extra_query_list=None,
                     map_fn = None):
```
```diff
@@ -290,7 +296,11 @@ def to_data_df(key, entry_it, map_fn = None):
         if map_fn is None:
             map_fn = BuiltinTimeSeries._to_df_entry
         # Dataframe doesn't like to work off an iterator - it wants everything in memory
-        df = pd.DataFrame([map_fn(e) for e in entry_it])
+
+        for e in entry_it:
+            BuiltinTimeSeries.entryList.append(map_fn(e))
+        df = pd.DataFrame(BuiltinTimeSeries.entryList)
```
**Comment on lines +300 to +303:**

> Ditto.
||
logging.debug("Found %s results" % len(df)) | ||
if len(df) > 0: | ||
dedup_check_list = [item for item in ecwe.Entry.get_dedup_list(key) | ||
|
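The "Ditto" above covers a second consequence of the class-level list, which this hedged sketch demonstrates (the structure mimics the patched `to_data_df`; the names other than `entryList` are hypothetical): because the shared list is never cleared, each call re-includes every row appended by all previous calls, so the resulting dataframe grows with stale duplicates instead of reflecting only the current query.

```python
class TS:
    # Shared class-level list, as in the patch under review.
    entryList = []

    @staticmethod
    def to_rows(entry_it, map_fn=lambda e: e):
        # Mimics the patched to_data_df: append into the shared class list,
        # then build the result from that list rather than from this
        # call's input alone.
        for e in entry_it:
            TS.entryList.append(map_fn(e))
        return list(TS.entryList)

first = TS.to_rows([1, 2, 3])
second = TS.to_rows([4, 5])

print(len(first))   # 3
print(len(second))  # 5 rows, not 2: the previous call's entries leak in
```

The original one-liner, `pd.DataFrame([map_fn(e) for e in entry_it])`, builds the frame from the current iterator only and so has neither the sharing nor the accumulation problem.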
**Review thread:**

> I don't see why we need a separate array here. The bins are already indexed by `bin_id`, and the bin has the labels in it, so I don't see what this is adding to the data structure.

> The `clustering.py` file (which calls this function) builds a `data_loc` dataframe that requires […]. However, the way they are stored here is […]. Surely, we can extract the trip and its bin from the way they are already stored, but creating a separate array up front feels more efficient.

> But the problem with creating an additional data structure is that we are then changing production code to make analysis easier. We don't really need the trip -> bin mapping in production, and at some point we might want to include memoization as well. So I would prefer doing the additional work in `clustering.py` to create the dataframe from the output of the production model, and not the other way around.

> That makes sense. I'll remove the extra data structure and move all the additional computations to `clustering.py`.
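The agreed-upon direction — derive the trip -> bin mapping on the analysis side from the model's output — can be sketched as follows. This is a hypothetical illustration only: the `{bin_id: [trip_id, ...]}` shape is assumed from the reviewer's remark that "the bins already are indexed by the bin_id", and the function name is invented, not part of the emission codebase.

```python
def trips_to_bins(bins):
    """Invert an assumed {bin_id: [trip_id, ...]} mapping into
    sorted (trip_id, bin_id) rows, ready to feed into a dataframe
    on the analysis side without touching production code."""
    rows = []
    for bin_id, trip_ids in bins.items():
        for trip_id in trip_ids:
            rows.append((trip_id, bin_id))
    return sorted(rows)

# Hypothetical model output: two bins holding three trips.
bins = {0: ["trip_a", "trip_c"], 1: ["trip_b"]}
print(trips_to_bins(bins))
# [('trip_a', 0), ('trip_b', 1), ('trip_c', 0)]
```

Since the inversion is a few lines over the model's existing output, keeping it in `clustering.py` avoids adding analysis-only state to the production time-series class.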