The bulk of the program tracks the orientation of the foot and its weight placement.
Every color image can be broken down into three channels (a whiter pixel means a higher numerical value):
Hue
Saturation
Value
By setting upper and lower limits on these values, we can separate the socks from the background.
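As a minimal sketch of that separation, assuming OpenCV in Python; the file name and the HSV limits below are placeholders chosen for illustration, not the project's actual values:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")               # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # split into hue, saturation, value

# Placeholder limits; real values depend on sock color and lighting.
lower = np.array([100, 80, 80])
upper = np.array([130, 255, 255])

# Binary mask: white where a pixel falls inside the limits (the socks).
mask = cv2.inRange(hsv, lower, upper)
```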
Using the limits above, we extract the shapes of the feet with contour edge detection.
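Continuing the sketch, contour detection over that mask could look like this (keeping the two largest contours as the feet is an illustrative assumption):

```python
# OpenCV 4.x returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Assume the two largest contours are the feet; smaller ones are noise.
feet = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
```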
We can classify the shapes of the feet using image moments.
These moments respond only to changes in shape, not to changes in rotation or scale.
Watch how these changes affect the degree of similarity (redder hue = less similar).
Rotationally Invariant
Shape Variant
Scale Invariant
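As a sketch of where those similarity scores could come from: OpenCV exposes the seven Hu moments, which are invariant to translation, rotation, and scale. The log-scaling below is a common convention for comparison, not necessarily the one used here.

```python
# Hu moments of one foot contour (feet[0] from the sketch above).
moments = cv2.moments(feet[0])
hu = cv2.HuMoments(moments).flatten()  # seven shape descriptors

# Log-scale so the seven values sit on a comparable numeric range;
# the 1e-30 term guards against taking the log of zero.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```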
The feet's edges are deformed by weight distribution, distance from the camera, and obstruction of view.
These deformations allow us to define specific states of the feet (e.g. weight on the heel).
But the edges of the left and right foot are very similar, so distinct shapes let us differentiate between the left and the right foot.
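One way this comparison could be done, sketched with OpenCV's matchShapes (which compares contours via their Hu moments); left_template and right_template are hypothetical reference contours captured ahead of time:

```python
def classify_foot(contour, left_template, right_template):
    # matchShapes returns a distance: lower means more similar.
    d_left = cv2.matchShapes(contour, left_template, cv2.CONTOURS_MATCH_I1, 0.0)
    d_right = cv2.matchShapes(contour, right_template, cv2.CONTOURS_MATCH_I1, 0.0)
    return "left" if d_left < d_right else "right"
```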
Using the contour edge data above, we can determine the location, orientation, and state of the contour.
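For example, the contour's centroid (location) and axis angle (orientation) fall out of its first- and second-order moments; this is the standard formula, not necessarily the exact code used:

```python
import math

M = cv2.moments(feet[0])

# Centroid: first-order moments divided by the area.
cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]

# Orientation: angle of the principal axis from second-order
# central moments, in radians.
theta = 0.5 * math.atan2(2 * M["mu11"], M["mu20"] - M["mu02"])
```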
But we can't determine how much force each foot is bearing.
Load cells are small transducers that measure force.
By triangulating the load cells' readings, we can approximate the location of the net force.
If that net force is closer to one foot, that foot is bearing a larger portion of the load.
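A small sketch of that triangulation, assuming four load cells at the corners of a square plate; the positions and readings are made-up example values:

```python
import numpy as np

cell_positions = np.array([[0.0, 0.0], [1.0, 0.0],
                           [0.0, 1.0], [1.0, 1.0]])  # cell locations (m)
cell_forces = np.array([12.0, 3.0, 10.0, 2.5])       # cell readings (N)

# Net-force location: the force-weighted average of the cell
# positions, i.e. the center of pressure.
cop = (cell_forces[:, None] * cell_positions).sum(axis=0) / cell_forces.sum()
```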