Improved color matching through a change of coordinates #1128
@sidharth-sundar Following up on some things from our meeting.

How to run locally

Here's an example using the M5 objects experiment. This experiment is defined using a parameters file. See m5_objects_v0_with_mugs_subset.params. To run this experiment, you would do (from the repository root):

python adam/experiments/log_experiment.py parameters/experiments/p3/m5_objects_v0_with_mugs_subset.params

These params files are YAML files with some minor extensions for variable interpolation and including other YAML/params files. This file is where we implement those extensions, plus some related convenience functions.

Color evaluation experiment

Once this is implemented, we'll want to evaluate how well this is working. At that point, I suggest the following experiment:
For clarity, writing up here an idea @sidharth-sundar had: we could also approach color matching by matching using a multivariate Gaussian distribution. I think this is worth trying. That would give us a third thing to compare with our baseline in the experiment outlined above, so we would compare results between (1) the baseline, using exact RGB match, (2) CIELAB with simple matching, and (3) CIELAB with multivariate matching. Note this experiment requires that we can switch ~easily between (2) and (3).

Because it was so messy to add a continuous-value matching threshold to the learners last time, I'd like to avoid creating a second threshold for colors, so I want them to use consistent scales for match scores. I'd also like to keep things 0-1 if possible because the bounded scale seems easier to understand. So, either do something like a multivariate hypothesis test, which keeps things on a 0-1 scale consistent with the 1D case, or (if we have to) use Mahalanobis distance and change the previous code to use absolute z-score as the match score.

One difficulty we'd need to solve is extending/replacing Welford's algorithm to handle the multivariate case. I haven't found a great source from a quick search. This might be useful, but it doesn't discuss Welford's algorithm explicitly, and from a quick skim I can't tell if it's implicitly using Welford's algorithm for the one-dimensional special case. So it might be doing something like the naive 1D thing, which has poor numerical properties. The author does seem to be aware of Welford's algorithm, but who knows if that translates to a stable algorithm in the other post. This paper could be useful, though who knows. It probably has an answer, because they explicitly discuss extending Welford's algorithm to "weighted covariance", which includes unweighted covariance as a special case. But from a very shallow skim, it looks pretty dense. The first 3 pages seem the most likely to yield a useful answer if there's one to be found there.
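As a concrete starting point, here's a minimal sketch of the standard multivariate extension of Welford's algorithm (a running mean plus a running sum of outer-product deviations), together with one way the "multivariate hypothesis test" idea could give a 0-1 match score. The class and method names are mine, not anything in the repository, and the exact scoring convention is an assumption:

```python
import numpy as np
from scipy.stats import chi2


class RunningMultivariateGaussian:
    """Online mean/covariance via the multivariate form of Welford's algorithm."""

    def __init__(self, dim: int) -> None:
        self.n = 0
        self.mean = np.zeros(dim)
        # Running sum of outer-product deviations; covariance = m2 / (n - 1).
        self.m2 = np.zeros((dim, dim))

    def update(self, x) -> None:
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Uses the deviation from the *updated* mean, as in the 1D algorithm.
        self.m2 += np.outer(delta, x - self.mean)

    def covariance(self) -> np.ndarray:
        if self.n < 2:
            raise ValueError("Need at least two observations to estimate covariance.")
        return self.m2 / (self.n - 1)

    def match_score(self, x) -> float:
        """0-1 score: probability that a sample from the fitted Gaussian lies
        farther from the mean (in Mahalanobis distance) than x does.
        In 1D this reduces to the two-sided tail probability of the z-score."""
        diff = np.asarray(x, dtype=float) - self.mean
        d2 = diff @ np.linalg.solve(self.covariance(), diff)  # squared Mahalanobis distance
        return float(chi2.sf(d2, df=self.mean.shape[0]))
```

With that scoring convention, an observation exactly at the mean scores 1.0 and the score decays toward 0 as the observation moves away, which keeps things on the same bounded scale as the 1D case.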
Also, it looks like this gives two generalizations of the CDF to multivariate normal distributions. I think the latter is the one we want, as it's easier to compute with than the "axis-aligned" CDF. That is, this one:
Linking the report Sid put together from here so that we can easily find it again if needed:
Our current color representation is RGB, which is hard for us to match well. We've considered doing classification, either a more fine-grained manual one (difficult...) or using a model, but neither seems like a good solution for the time we have left. Our current strategy for handling color is simply exact RGB color matching. In theory we could switch to distance-based matching: match when the distance falls under an arbitrary or configurable threshold, or apply our new continuous feature matching strategy to learn a distribution of acceptable deviations in color. However, RGB color distance doesn't obviously correspond well with human perception, which our matching should ideally agree with.
I suspect we can make distance-based matching work well enough by doing a change of coordinates. Color coordinates are a complicated subject, but from a very cursory search, it sounds like the CIELab (or CIELAB, or (CIE) L*a*b, or Lab) color system would be a good coordinate system to try. On its color models page, Wikipedia notes:
There is an easy approximate metric we can use once we've done this change: Just calculate the Euclidean distance between the colors as points in CIELab space (see here). Then we can do distributional, distance-based matching.
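For reference, that Euclidean distance is the CIE76 "delta E" and is trivial to compute. A minimal sketch (the function name is my own):

```python
import numpy as np


def delta_e_cie76(lab1, lab2) -> float:
    """CIE76 color difference: the Euclidean distance between two CIELAB points."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))
```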
Technical details
We want to implement this as a new pattern/graph node pair. The pattern node tracks a reference point in CIELab space; when we create a pattern node from a graph node, we use the graph node's point in CIELab space as the reference point. A pattern node always matches other pattern nodes, but it matches a graph node only if the graph node is "close enough" to the reference point.
When we confirm a CIELab pattern node-pattern node match between two nodes, say X and Y, the pattern node X (or self) should be updated: at minimum, update the distance distribution with the distance between X's reference point and Y's. For now, we never update the reference point. We fix the reference point in place and only update the distance distribution.
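Here's a hypothetical sketch of what that pattern/graph node pair could look like; the class and method names are illustrative only and are not ADAM's actual perception-graph API, and the "close enough" check is shown with a placeholder threshold rather than the distributional match score:

```python
from dataclasses import dataclass, field
from typing import Tuple
import math


@dataclass
class CielabColorNode:
    """Perception-graph node: a color as a point in CIELAB space."""
    lab: Tuple[float, float, float]


@dataclass
class RunningDistanceDistribution:
    """1D running mean/variance of confirmed match distances (Welford's algorithm)."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, distance: float) -> None:
        self.n += 1
        delta = distance - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (distance - self.mean)


@dataclass
class CielabColorPatternNode:
    """Pattern node: a fixed CIELAB reference point plus a distribution over
    the distances of confirmed matches from that point."""
    reference_lab: Tuple[float, float, float]
    distances: RunningDistanceDistribution = field(default_factory=RunningDistanceDistribution)

    @classmethod
    def from_graph_node(cls, node: CielabColorNode) -> "CielabColorPatternNode":
        # Creating a pattern node from a graph node: the graph node's CIELAB
        # point becomes the (fixed) reference point.
        return cls(reference_lab=node.lab)

    def matches_graph_node(self, node: CielabColorNode, threshold: float) -> bool:
        # A graph node matches only if it is "close enough" to the reference
        # point; the threshold here stands in for whatever criterion we settle on.
        return math.dist(self.reference_lab, node.lab) <= threshold

    def confirm_match(self, other: "CielabColorPatternNode") -> None:
        # On a confirmed pattern-pattern match, update the distance distribution
        # with the distance between the two reference points; the reference
        # point itself stays fixed.
        self.distances.update(math.dist(self.reference_lab, other.reference_lab))
```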
Finally, where we translate YAML into a perception node, we would want to translate the (s)RGB color into a CIELab color and add a CIELab color node. Where we translate perception graphs into patterns, we want to translate the graph node into a pattern node.
Note that on a technical level, we do not want to replace the RGB colors. We want to preserve them so we can display them in the "experiment results viewer" user interface, the Angular web app. The CIELab nodes should be a distinct feature from the current RGB color nodes.
How to translate from (s)RGB into CIELab?
This is the core question that needs more research. I'm not sure of the technical details of how this works, although it sounds like it is a two-step process: first translate from sRGB to CIE XYZ, then translate from CIE XYZ to CIE L*a*b. I don't know the specific formulas; from what I saw, they look complicated, and they're unfortunately not in the Python standard library (unlike HSV/HSL and a few others -- "YIQ", whatever that is).
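For what it's worth, here's a minimal sketch of that two-step conversion as I understand the published formulas, assuming 8-bit sRGB input and the D65 reference white; worth cross-checking against an existing color library before trusting it:

```python
import numpy as np

# Linear sRGB -> CIE XYZ matrix for the D65 white point.
_SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])
# D65 reference white in XYZ (Y normalized to 1.0).
_WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])


def srgb_to_cielab(rgb_255):
    """Convert an (R, G, B) triple in 0-255 to (L*, a*, b*)."""
    rgb = np.asarray(rgb_255, dtype=float) / 255.0
    # Step 1a: undo the sRGB gamma ("companding") to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Step 1b: linear RGB -> XYZ.
    xyz = _SRGB_TO_XYZ @ linear
    # Step 2: XYZ -> L*a*b* using the piecewise cube-root function f.
    t = xyz / _WHITE_D65
    delta = 6.0 / 29.0
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)
    fx, fy, fz = f
    return (116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz))
```

As a sanity check, srgb_to_cielab((255, 255, 255)) should come out at roughly (100, 0, 0).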
Some parameters I think we can specify now:
There are most likely other "hows" that I don't yet know I need to specify. Ask away.
Further work/out of scope for this issue
Updating the reference point
We may want to update the reference point, but that complicates things because it invalidates our distance distribution: if we move the reference point an arbitrary distance, the distance to each of the previously observed points also changes. That's a problem. At inference time our reference point stays fixed, so we don't want to use a distance distribution whose distances were measured against a moving reference point.
One way to deal with this would be to simply throw out the old distribution. We could do this whenever the distance in one update is too large, but this may cause "boiling the frog" problems with an adversarially designed curriculum, where we do a bunch of updates that move the color from one side of color-space to the other, but none of them forces us to throw out the distance distribution and so we end up with a distance distribution whose variance is too small to capture all colors observed. It's too precise.
A better way might be to track a running mean over all points observed and "start over" whenever the mean and the reference point deviate too much. (New problem: How much is too much? Arbitrary threshold? 🙃) By "starting over" I mean we use the mean as the new reference point, and we start from an "empty" distance distribution. We'd update this running mean similar to how the existing Gaussian matcher updates the mean, except treating the color coordinates as a vector. So the new reference point is
(L_old + (L_obs - L_old)/n_including_obs, a_old + (a_obs - a_old)/n_including_obs, b_old + (b_obs - b_old)/n_including_obs)
i.e. cielab_old + (cielab_obs - cielab_old)/n_including_obs (treating the coordinates as vectors).
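A hypothetical sketch of the "running mean plus start over" idea; the restart threshold and all names here are placeholders, not values or APIs the project actually has:

```python
import numpy as np

RESTART_THRESHOLD = 10.0  # in CIELAB (delta E) units; arbitrary placeholder


class ReferencePointTracker:
    """Keeps a fixed reference point plus a running mean of all observations,
    and "starts over" when the two drift too far apart."""

    def __init__(self, initial_lab) -> None:
        self.reference = np.asarray(initial_lab, dtype=float)
        self.running_mean = self.reference.copy()
        self.n = 0
        self.distances = []  # stand-in for the real distance distribution

    def observe(self, lab) -> None:
        lab = np.asarray(lab, dtype=float)
        self.n += 1
        # Vector form of the running-mean update quoted above.
        self.running_mean += (lab - self.running_mean) / self.n
        self.distances.append(float(np.linalg.norm(lab - self.reference)))
        # "Start over": adopt the running mean as the new reference point and
        # discard the now-invalid distance distribution.
        if np.linalg.norm(self.running_mean - self.reference) > RESTART_THRESHOLD:
            self.reference = self.running_mean.copy()
            self.distances = []
```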
More sophisticated distance metrics
I quoted the Euclidean metric above, but there are other color distance metrics intended to more exactly approximate "perceptual uniformity", i.e. the property that equal numerical distances correspond to roughly equal perceived differences in color. There are several more sophisticated CIELab-based distance metrics we could use (CIE94 and CIEDE2000). We could probably develop this into its own line of work if we were so interested, comparing the different ways of matching color. However, this is a deep rabbit hole, so for now those things are out of scope.
Determining an appropriate illumination condition/"reference white"
I'm arbitrarily fixing the reference white we use to a specific one. A "smart" system might try to pick a reference white based on the whole image, and possibly other frames/camera views. This requires actually looking at the image however, and I'm not sure how much work this is -- that would be more ASU's territory. This might be an interesting thing to do, but it is almost definitely out of scope.