Look at these two screen shots:
Clearly they are “of” the same thing, but in a different state. There is a comparison to be made here. (In this case, I want to animate between these screen shots.) Since I’m human, they are cropped differently.
Ideally I could provide these two images to a goat, and the goat would baaaaah at me and then return the images to me cropped such that the pixels that haven't changed line up exactly. (I might want to crop a little further, within the boundaries of this overlapping space.)
And then I could annotate both images at once, with my annotations appearing in the right place on both images.
Were I to do this without the help of a friendly ruminant, I would open up an image editor (like Acorn) and layer both images on top of each other, temporarily changing blending mode and/or opacity to line up what I wanted lined up:
Then I’d crop, reset the blending/opacity, add an annotation layer, and export both layers as separate images.
That’s how I made this GIF:
Now something interesting you can see in my two-up (shown again below) is that there are fewer meaningful identical pixels in the image than I thought there would be.
I’m thinking about how a digital simulation of our hypothetical friendly cloven-hoofed friend would go about this.
I think it's more than maximizing unchanged pixels.
It’s probably more about maximizing unchanged edges.
Edge detection is too much for Acorn.
Let me try Photoshop…
time passes… nothing happens…
OK, no.
How about a free web page instead? Wonderful. Here’s the result of a 3x3-convolution Laplacian edge detection algorithm:
and
Cool 😎
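For the curious, that 3x3 Laplacian trick is simple enough to sketch in a few lines of NumPy. A minimal sketch, assuming a grayscale 2-D array and ignoring the border handling and color channels that real tools deal with:

```python
import numpy as np

# 3x3 Laplacian kernel: responds wherever brightness changes abruptly,
# and sums to zero, so flat regions produce no response at all.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def edges(gray):
    """Convolve a 2-D grayscale array with the Laplacian kernel
    (valid region only, so the output is 2 px smaller each way)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return np.abs(out)

# A flat image yields zero everywhere; a vertical brightness step lights up.
flat = np.full((5, 5), 100.0)
step = np.hstack([np.zeros((5, 3)), np.full((5, 2), 255.0)])
print(edges(flat).max())      # 0.0 — no edges in a flat image
print(edges(step).max() > 0)  # True — the step is detected
```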
Here’s what happens when we run a ‘difference’ blend on the two images overlapped naïvely at the origin, that is, both with top-left corners at (0, 0).
Note that black means there is no difference.
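The ‘difference’ blend itself is nothing fancy: just per-pixel absolute difference. A sketch, assuming same-sized grayscale arrays:

```python
import numpy as np

def difference_blend(a, b):
    """Per-pixel 'difference' blend of two same-sized grayscale arrays.
    0 (black) wherever the pixels are identical."""
    return np.abs(a.astype(int) - b.astype(int))

a = np.array([[10, 200], [30, 40]])
b = np.array([[10, 100], [30, 90]])
print(difference_blend(a, b))
# Identical pixels go black (0); changed pixels light up:
# [[  0 100]
#  [  0  50]]
```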
Here is the sweet spot:
Note that the entire top line of text and avatar have 'disappeared'.
Overall, I would expect the result to be ‘blackest’ when things are ‘most lined up’, but that may still be too naïve. Maybe we have to exploit the fact that a whole region just went black, and maximize the overall calmness / solid blackness of the image? All guesswork.
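One way our digital goat might hunt for that sweet spot: brute-force every small offset of image B and keep whichever one makes the overlapping region’s difference blend blackest. A sketch under those assumptions, using raw pixels rather than edges, with grayscale NumPy arrays (`best_offset` and `max_shift` are made-up names for illustration, not any real library’s API):

```python
import numpy as np

def best_offset(a, b, max_shift=10):
    """Search shifts (dy, dx) of b's top-left in a's coordinates, and
    return whichever minimizes the mean absolute difference over the
    overlapping region — i.e. the 'blackest' difference blend."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows of a and of shifted b
            ay0, by0 = max(0, dy), max(0, -dy)
            ax0, bx0 = max(0, dx), max(0, -dx)
            h = min(a.shape[0] - ay0, b.shape[0] - by0)
            w = min(a.shape[1] - ax0, b.shape[1] - bx0)
            if h <= 0 or w <= 0:
                continue  # no overlap at this offset
            diff = np.abs(a[ay0:ay0 + h, ax0:ax0 + w].astype(int)
                          - b[by0:by0 + h, bx0:bx0 + w].astype(int))
            score = diff.mean()
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# b is a crop of a whose top-left sits at (2, 3); the search recovers that.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (40, 40))
b = a[2:, 3:]
print(best_offset(a, b))  # (2, 3)
```

Swapping the raw pixel arrays for their Laplacian edge images would turn this into the “maximize unchanged edges” version of the same search.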
It may be necessary to allow the user to align the images themselves (especially in pathological cases).
I can think of two different UIs for this.
One is asking the user to point out an 'anchor point' (zoomed in for pixel precision) on each image. I might choose:
Maybe the UI could look something like this:
Let me explain:
Alternatively, the interface could be just the top panel, but you drag the top layer (image B) around (and/or use arrow keys to nudge) so you can find the perfect alignment. Your ‘canvas’ is image A. The intersection of images A and B once B is placed determines your crop area and alignment.
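Once B has been dragged into place, computing the crop is just rectangle intersection. A sketch, with the offset measured as B’s top-left in A’s coordinates (`overlap_crop` is a hypothetical helper, not part of any real tool):

```python
def overlap_crop(size_a, size_b, offset):
    """Given image sizes (h, w) and B's top-left offset (dy, dx) in A's
    coordinates, return matching crop boxes (top, left, height, width)
    for A and for B — the region where the two images intersect.
    Returns None if they don't overlap at all."""
    (ha, wa), (hb, wb), (dy, dx) = size_a, size_b, offset
    top, left = max(0, dy), max(0, dx)
    bottom, right = min(ha, dy + hb), min(wa, dx + wb)
    if bottom <= top or right <= left:
        return None
    h, w = bottom - top, right - left
    crop_a = (top, left, h, w)
    crop_b = (top - dy, left - dx, h, w)  # same region in B's coordinates
    return crop_a, crop_b

# B (50x60) dragged 10 px down and 20 px left of A (100x80):
print(overlap_crop((100, 80), (50, 60), (10, -20)))
# ((10, 0, 50, 40), (0, 20, 50, 40))
```

Exporting both crops then gives two images of identical size with the unchanged pixels lined up, ready for annotation or GIF-making.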
Also, sometimes I want a GIF that just flips from one image to the next every second, and sometimes I might want two-up output.