2013.04.09 Keypoint Free: meeting report
- Could fine-tune the lower layers with respect to a discriminative signal from higher up. This applies both to bag-of-features-type models and to the visual bigram model.
- Could use backprop to learn which pairs to use ("learn where to look").
- Use a keypoint detector to propose patches, combined with spatial proximity.
- Could (maybe should) train the model on learned representations instead of on raw pixels.
- Rather than a 3-way model, could use Salah's model (without orthogonalization): concatenate the spatial relationship, patch1, and patch2, then train an autoencoder on that concatenation.
- We could train to maximize log P(patch1 | patch2, spatial rel.) + log P(patch2 | patch1, spatial rel.) + log P(spatial rel. | patch1, patch2), or minimize the equivalent reconstruction errors (see the sketch below).
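
A minimal numpy sketch of the reconstruction-error variant of this objective, assuming a tied-weight autoencoder over the concatenation (r, p1, p2). The sigmoid encoder, the toy dimensions, and the per-part squared errors are illustrative assumptions, not an agreed implementation:

```python
# Hypothetical sketch: reconstruction-error version of the three-term
# objective on the concatenation (r, p1, p2).
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_losses(r, p1, p2, W, b_hid, b_vis):
    """Encode the concatenation (r, p1, p2) with a tied-weight autoencoder
    and return per-part squared reconstruction errors. Minimizing their sum
    is the 'equivalent reconstruction errors' counterpart of
    log P(p1 | p2, r) + log P(p2 | p1, r) + log P(r | p1, p2)."""
    v = np.concatenate([r, p1, p2])
    h = sigmoid(W @ v + b_hid)      # encoder
    v_hat = W.T @ h + b_vis         # tied-weight linear decoder
    # split the reconstruction back into the three parts
    n_r, n_p = r.size, p1.size
    r_hat, p1_hat, p2_hat = np.split(v_hat, [n_r, n_r + n_p])
    return (np.sum((r - r_hat) ** 2),
            np.sum((p1 - p1_hat) ** 2),
            np.sum((p2 - p2_hat) ** 2))

# toy dimensions: 2-d spatial relation, 8x8 patches, 100 hidden units
n_r, n_p, n_h = 2, 64, 100
W = 0.01 * rng.randn(n_h, n_r + 2 * n_p)
b_hid, b_vis = np.zeros(n_h), np.zeros(n_r + 2 * n_p)
losses = reconstruction_losses(rng.randn(n_r), rng.rand(n_p), rng.rand(n_p),
                               W, b_hid, b_vis)
```
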
- Contribute code that takes a TFD image as input and outputs a list of keypoints; explore the Coates pipeline with those keypoints (Vincent). A sketch of such a keypoint utility follows this list.
- Commit preliminary bigram code (Roland).
- Read and understand the bigram code once it is committed (Guillaume, Razvan).
- Explore an autoencoder on the concatenation of the spatial information and the two patches, (r, p1, p2), similar to Salah's model (Xavier).
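
Below is a rough sketch of what the TFD keypoint utility from the first action item might look like. The Harris detector, scikit-image, the 48x48 grayscale assumption, and the patch-cropping helper are all illustrative choices, not the code actually to be committed:

```python
# Hypothetical sketch of a "TFD image -> list of keypoints" utility,
# plus patch extraction around those keypoints for a Coates-style pipeline.
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def tfd_keypoints(image, min_distance=3, num_keypoints=20):
    """Return up to `num_keypoints` (row, col) keypoints for one TFD image
    (assumed 48x48 grayscale), strongest Harris response first."""
    image = image.astype(np.float64)
    response = corner_harris(image)
    coords = corner_peaks(response, min_distance=min_distance)
    # rank candidates by Harris response and keep the strongest ones
    strengths = response[coords[:, 0], coords[:, 1]]
    order = np.argsort(strengths)[::-1][:num_keypoints]
    return [tuple(c) for c in coords[order]]

def patches_around(image, keypoints, size=8):
    """Crop a size x size patch centred on each keypoint; keypoints too
    close to the border are skipped."""
    half = size // 2
    out = []
    for r, c in keypoints:
        if half <= r <= image.shape[0] - half and half <= c <= image.shape[1] - half:
            out.append(image[r - half:r + half, c - half:c + half])
    return out
```
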