Update greedy_similarity_binning.py #930

Closed
@@ -121,6 +121,7 @@ class label to apply:
self.is_incremental = config['incremental_evaluation']

self.bins: Dict[str, Dict] = {}
self.tripLabels=[]
Contributor:

I don't see why we need a separate array here. The bins are already indexed by bin_id, and each bin has the labels in it:

    "25": {
        'feature_rows': ...,
        'labels': ...
    }

So I don't see what this is adding to the data structure.

Contributor Author (@humbleOldSage), Aug 11, 2023:

The clustering.py file (which calls this function) builds a data_loc dataframe that requires a trip -> bin mapping:

    Trip 1 --> bin No. (say 2)
    Trip 2 --> bin No. (say 3)
    Trip 3 --> bin No. (say 2)
    Trip 4 --> bin No. (say 1)
    Trip 5 --> bin No. (say 1)
    ...

However, the way they are stored here is:

    "1": { 'feature_rows': [[features of Trip 4], [features of Trip 5]],
           'labels': ... }

    "2": { 'feature_rows': [[features of Trip 1], [features of Trip 3]],
           'labels': ... }

    "3": { 'feature_rows': [[features of Trip 2]],
           'labels': ... }

Surely, we can extract each trip and its bin from the way they are already stored, but creating a separate array up front seemed more efficient.
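A minimal sketch of that extraction, assuming each trip's feature row lands in exactly one bin (the function and variable names here are hypothetical, not project code):

```python
# Hypothetical sketch: recover the trip -> bin_id mapping afterwards
# from the bins structure, instead of keeping a parallel tripLabels array.

def trip_to_bin_labels(bins, trip_features):
    """bins: {bin_id: {'feature_rows': [...], 'labels': [...]}};
    trip_features: the feature rows in original trip order."""
    # Invert the structure: map each feature row (as a tuple) to its bin id.
    row_to_bin = {
        tuple(row): bin_id
        for bin_id, b in bins.items()
        for row in b['feature_rows']
    }
    return [row_to_bin[tuple(row)] for row in trip_features]

bins = {
    "1": {'feature_rows': [[4.0], [5.0]], 'labels': []},
    "2": {'feature_rows': [[1.0], [3.0]], 'labels': []},
    "3": {'feature_rows': [[2.0]], 'labels': []},
}
trips = [[1.0], [2.0], [3.0], [4.0], [5.0]]
print(trip_to_bin_labels(bins, trips))  # ['2', '3', '2', '1', '1']
```

This is the kind of derivation that could live in clustering.py without touching the production model.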

Contributor:

But the problem with creating an additional data structure is that we would be changing production code to make analysis easier. We don't really need the trip -> bin mapping in production, and at some point we might want to include memoization as well. So I would prefer doing the additional work in clustering.py to create the dataframe from the output of the production model, and not the other way around.

Contributor Author (@humbleOldSage), Aug 12, 2023:

That makes sense. I'll remove the extra data structure and move all the additional computations to clustering.py.



def fit(self, trips: List[ecwc.Confirmedtrip]):
@@ -184,9 +185,11 @@ def _assign_bins(self, trips: List[ecwc.Confirmedtrip]):
logging.debug(f"adding trip to bin {bin_id} with features {trip_features}")
self.bins[bin_id]['feature_rows'].append(trip_features)
self.bins[bin_id]['labels'].append(trip_labels)
self.tripLabels.append(bin_id)
else:
# create new bin
new_bin_id = str(len(self.bins))
self.tripLabels.append(new_bin_id)
new_bin_record = {
'feature_rows': [trip_features],
'labels': [trip_labels],
12 changes: 11 additions & 1 deletion emission/storage/timeseries/builtin_timeseries.py
@@ -23,6 +23,9 @@
INVALID_QUERY = {'metadata.key': 'invalid'}

class BuiltinTimeSeries(esta.TimeSeries):

entryList=[]

Comment on lines +26 to +28
Contributor:

@humbleOldSage this is wrong. There is not only one set of entries in the database. Please read and understand the data model from chapter 5 of my thesis.

Contributor Author:

Figured this was not necessary.
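For context on why a class-level `entryList = []` is problematic on its own: in Python, a mutable class attribute is shared by every instance, so entries from different users' timeseries would all accumulate in one list. A standalone illustration (not the actual BuiltinTimeSeries code):

```python
# Illustration of the shared-mutable-class-attribute pitfall.

class TimeSeriesLike:
    entryList = []  # class attribute: ONE list shared by all instances

    def add(self, entry):
        self.entryList.append(entry)  # mutates the shared class-level list

ts_a = TimeSeriesLike()  # e.g. timeseries for user A
ts_b = TimeSeriesLike()  # e.g. timeseries for user B
ts_a.add("trip_a")
ts_b.add("trip_b")
print(ts_a.entryList)  # ['trip_a', 'trip_b'] -- user A sees user B's entry
print(ts_a.entryList is ts_b.entryList)  # True: same underlying list
```

Per-instance state would instead be initialized inside `__init__` (e.g. `self.entryList = []`), but as the review notes, no new state is needed here at all.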

def __init__(self, user_id):
super(BuiltinTimeSeries, self).__init__(user_id)
self.key_query = lambda key: {"metadata.key": key}
@@ -261,6 +264,9 @@ def get_entry_at_ts(self, key, ts_key, ts):
logging.debug("get_entry_at_ts result = %s" % retValue)
return retValue

def getEntryList(self):
return self.entryList

Comment on lines +267 to +269
Contributor:

Again, this is wrong as well. You can't return the entry list because there is not just one.

Contributor:

There are existing methods to get an entry list; the trip model uses them. You should not have to make changes to the BuiltinTimeSeries for this change.

def get_data_df(self, key, time_query = None, geo_query = None,
extra_query_list=None,
map_fn = None):
@@ -290,7 +296,11 @@ def to_data_df(key, entry_it, map_fn = None):
if map_fn is None:
map_fn = BuiltinTimeSeries._to_df_entry
# Dataframe doesn't like to work off an iterator - it wants everything in memory
df = pd.DataFrame([map_fn(e) for e in entry_it])

for e in entry_it:
BuiltinTimeSeries.entryList.append(map_fn(e))
df = pd.DataFrame(BuiltinTimeSeries.entryList)

Comment on lines +300 to +303
Contributor:

Ditto.

logging.debug("Found %s results" % len(df))
if len(df) > 0:
dedup_check_list = [item for item in ecwe.Entry.get_dedup_list(key)
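To make the "Ditto" concrete: accumulating into a class-level list also means `to_data_df` keeps rows from earlier calls, so successive DataFrames grow instead of reflecting only the current iterator. A simplified reproduction of the proposed change (names hypothetical, assuming pandas is available):

```python
import pandas as pd

# Simplified version of the proposed to_data_df change:
# a class-level list used as an accumulator.
class Demo:
    entryList = []

    @staticmethod
    def to_data_df(entry_it):
        for e in entry_it:
            Demo.entryList.append(e)  # persists across calls
        return pd.DataFrame(Demo.entryList)

df1 = Demo.to_data_df([{'a': 1}])
df2 = Demo.to_data_df([{'a': 2}])
print(len(df1), len(df2))  # 1 2 -- the second frame still contains the first row
```

The original one-liner, `pd.DataFrame([map_fn(e) for e in entry_it])`, is stateless and avoids this entirely.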