Post-hoc fixation detection from a noisy eye-tracker signal.
I'm looking for tips! I'm using an eye tracker and brainstorming an algorithm for detecting whether someone is fixating on a target object. The most straightforward approach would be to cast a "gaze vector" from the camera center into the virtual world and test for an intersection of this vector with a target object. The issue is that the gaze vector may waver on/off the object between frames, or even sit stably a few degrees off the object.
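For reference, a minimal sketch of that per-frame test, with a bounding sphere standing in for the object (all names here are illustrative, not from any particular engine):

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return True if the gaze ray from `origin` along `direction`
    intersects a sphere at `center` with the given `radius`."""
    d = direction / np.linalg.norm(direction)   # normalize gaze direction
    oc = origin - center
    b = float(np.dot(oc, d))
    c = float(np.dot(oc, oc)) - radius * radius
    disc = b * b - c                            # discriminant of |o + t*d - c|^2 = r^2
    if disc < 0.0:
        return False                            # ray line misses the sphere entirely
    t_far = -b + disc ** 0.5
    return t_far >= 0.0                         # at least one hit in front of the origin
```

Running this per frame gives a boolean hit sequence, which is exactly where the flicker problem shows up.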
Note that this data will be collected and analyzed OFF-LINE, so I can filter the gaze signal and apply corrections after the fact.
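Since the analysis is offline, one simple pre-filter is a running median over the gaze samples, which suppresses single-frame flicker without smearing saccade boundaries the way a mean filter does. A sketch, assuming the recording is an (N, k) array of per-frame gaze values (e.g., yaw/pitch angles):

```python
import numpy as np

def median_filter_gaze(samples, window=5):
    """Offline running-median filter over per-frame gaze samples.

    samples: (N, k) array of gaze values, one row per frame.
    window: odd window length in frames (edges use a truncated window).
    """
    samples = np.asarray(samples, dtype=float)
    half = window // 2
    out = np.empty_like(samples)
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out[i] = np.median(samples[lo:hi], axis=0)  # per-component median
    return out
```

The window length trades noise rejection against how short a real fixation you can preserve.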
The ideal solution would take into account the view-dependent outline of the object projected onto the view plane, rather than relying on a bounding box or circle. So it makes sense to solve this problem after the perspective projection onto the view plane has already been performed.
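Once the silhouette has been projected to the view plane, the per-frame test reduces to a 2D point-in-polygon check of the projected gaze point against the outline. A standard even-odd-rule sketch (the silhouette extraction itself is assumed done elsewhere):

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: count crossings of a horizontal ray cast
    rightward from `pt` against the polygon's edges.

    pt: (x, y) projected gaze point.
    poly: list of (x, y) silhouette vertices in order.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                     # crossing is to the right of pt
                inside = not inside
    return inside
```

A tolerance margin can then be added by offsetting the polygon outward, which is the 2D analogue of the scaled-double idea below, er, of scaling the target up.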
So, the solution I'm contemplating is this: for each potential fixation target, create an invisible double. Place it at the same position, in the same orientation, but set its alpha to 0 and scale it up.
Fixations upon the target would then be detected by intersecting the gaze vector with the invisible, enlarged double.
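Even against the inflated double, per-frame hits will still flicker a little, so the last step is grouping the hit/miss sequence into fixation events, bridging short dropouts and discarding runs that are too brief to count as a fixation. A sketch of that grouping pass (parameter names and thresholds are illustrative):

```python
def detect_fixations(hits, min_frames=10, max_gap=2):
    """Group per-frame on-target booleans into fixation intervals.

    hits: list of bools, one per frame, from the gaze-ray-vs-double test.
    max_gap: off-target gaps up to this many frames are bridged.
    min_frames: runs shorter than this are discarded.
    Returns a list of (start, end) inclusive frame indices.
    """
    fixations = []
    start, gap = None, 0
    for i, hit in enumerate(hits):
        if hit:
            if start is None:
                start = i               # a candidate fixation begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:           # dropout too long: close the run
                end = i - gap
                if end - start + 1 >= min_frames:
                    fixations.append((start, end))
                start, gap = None, 0
    if start is not None:               # close any run still open at the end
        end = len(hits) - 1 - gap
        if end - start + 1 >= min_frames:
            fixations.append((start, end))
    return fixations
```

With the data analyzed offline, `min_frames` and `max_gap` can be tuned after the fact against the recording's frame rate.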
Any thoughts / tips?
- gD