Abstract
This paper presents a novel eye gaze tracking method with allowable head movement, based on a local pattern model (LPM) and a support vector regressor (SVR). The LPM, a combination of an improved pixel-pattern-based texture feature (PPBTF) and the local binary pattern (LBP) texture feature, is employed to compute texture features that characterize the eyes, while a new binocular vision scheme is adopted to measure the spatial coordinates of the eyes. The LPM texture features and the spatial coordinates are then fed into the SVR to fit a gaze mapping function and, subsequently, to track the gaze direction under allowable head movement. The experimental results show that the proposed approach estimates the gaze direction more accurately than the state-of-the-art pupil center corneal reflection (PCCR) method.
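To make the pipeline concrete, the following is a minimal sketch of the feature-and-regression stage, assuming that a plain uniform-LBP histogram stands in for the full LPM (the improved PPBTF component is specific to this paper and is not reproduced here), that the 3D eye coordinates from the binocular scheme are already available, and that the helper names, patch size, and SVR settings (lbp_histogram, build_feature, the RBF kernel, C, epsilon) are illustrative choices rather than the parameters used in the experiments.

```python
# Minimal sketch of the texture-feature + SVR gaze mapping described above.
# Assumptions: an LBP histogram approximates the LPM (no PPBTF part),
# eye_xyz are 3D eye coordinates from the binocular setup, and screen_xy
# are the ground-truth on-screen gaze points used for training.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def lbp_histogram(eye_patch, points=8, radius=1):
    """Uniform LBP histogram of a grayscale eye patch (one component of the LPM)."""
    lbp = local_binary_pattern(eye_patch, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins), density=True)
    return hist

def build_feature(eye_patch, eye_xyz):
    """Concatenate texture features with the 3D eye position (hypothetical layout)."""
    return np.concatenate([lbp_histogram(eye_patch), eye_xyz])

# Synthetic stand-in data: N frames of 32x32 eye patches, 3D eye coordinates,
# and the corresponding normalized screen gaze points.
rng = np.random.default_rng(0)
N = 200
patches = rng.integers(0, 256, size=(N, 32, 32)).astype(np.uint8)
eye_xyz = rng.uniform(-0.2, 0.2, size=(N, 3))   # metres in the camera frame (assumed)
screen_xy = rng.uniform(0.0, 1.0, size=(N, 2))  # normalized screen coordinates

X = np.stack([build_feature(p, c) for p, c in zip(patches, eye_xyz)])
# One RBF-kernel SVR per screen axis approximates the 2D gaze mapping function.
mapper = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
mapper.fit(X, screen_xy)

pred_xy = mapper.predict(X[:1])  # estimated on-screen gaze point for a new frame
```

A standard SVR predicts a single scalar, so two regressors (one per screen axis, wrapped here in MultiOutputRegressor) are used to obtain the 2D screen coordinate.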
1. Introduction
Eye gaze, referring to the direction of the line of sight, reveals a person’s focus of attention and interest. The majority of existing gaze tracking techniques are vision based, i.e., cameras are used to capture images of the eyes. Some of these camera-based techniques are intrusive, since special equipment such as chin rests, electrodes [25], and head-mounted cameras [26] must be attached to the user. The scheme proposed in this paper is non-intrusive; that is, users are not required to wear any devices.
5. Conclusion
This paper presents a novel eye gaze tracking scheme based on the local pattern model (LPM) and support vector regressor (SVR). The binocular vision method is adopted to calculate the spatial coordinates of the eyes, and the LPM algorithm is utilized to describe the features of the captured eyes. With the combination of the spatial coordinates and the LPM features as the input to the SVR, the mapping function between gaze direction and screen coordinates can be learned. The experimental results demonstrate the effectiveness of the proposed eye gaze tracking approach in comparison with state-of-the-art schemes. As future work, the proposed scheme will be extended by increasing the number of estimated points and the range of allowable head movement, and will be applied to human–computer interaction.
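For a concrete picture of the binocular step, the sketch below shows one standard way to recover the 3D eye coordinates by stereo triangulation; the projection matrices, intrinsics, baseline, and pupil-centre pixel locations are hypothetical, and the paper's own binocular vision scheme may differ from this illustration.

```python
# A minimal sketch of recovering 3D eye coordinates from a calibrated stereo
# pair. Assumptions: the 3x4 projection matrices P_left / P_right come from a
# prior stereo calibration, and (u, v) are the detected pupil centres in the
# left and right images. This is generic triangulation, not necessarily the
# binocular scheme used in the paper.
import numpy as np
import cv2

def triangulate_eye(P_left, P_right, uv_left, uv_right):
    """Return the 3D eye position (in the calibration frame) from stereo pupil detections."""
    pts_l = np.asarray(uv_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(uv_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                           # Euclidean (x, y, z)

# Example with hypothetical calibration data: identical intrinsics K and a
# 60 mm horizontal baseline between the two cameras.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])

eye_xyz = triangulate_eye(P_left, P_right, (310.0, 250.0), (260.0, 250.0))
```

The resulting eye_xyz vector is the kind of spatial coordinate that, together with the LPM texture features, forms the input to the SVR.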