Authors: Krishnakant Dixit (GLA University); Yogendra Kumar (GLA University)
Abstract: Static authentication mechanisms (e.g., passwords, biometrics) verify users only at the start of a session and are therefore vulnerable to session hijacking. Continuous authentication is more secure because it monitors the user's activities throughout a session. Eye-tracking is a promising modality for non-intrusive continuous biometric authentication in virtual reality (VR), owing to the rich gaze information VR headsets provide. Gaze-based models, however, degrade over time as users' behavioral patterns change. This paper uses the 26-month GazeBaseVR dataset to compare a Transformer Encoder, DenseNet, and XGBoost for short- and long-term authentication. While the Transformer Encoder and DenseNet achieve up to 97% short-term accuracy, accuracy falls to 1.78% after 26 months. Periodically retraining the model on up-to-date gaze data restores accuracy to above 95%. These findings highlight the importance of adaptive learning for sustaining gaze-based authentication over long periods. Future work may optimize retraining intervals and incorporate other behavioral signals, such as head and hand movements, to maximize the long-term resilience of VR authentication.
Keywords: Continuous authentication, Eye-tracking, Virtual reality, Gaze biometrics, Adaptive models, Machine learning
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE