In this study, we introduce EyeHacker, an immersive virtual reality (VR) system that spatiotemporally mixes live and recorded/edited scenes based on measurements of the users' gaze. The system updates a transition risk in real time using the users' gaze information (i.e., their locus of attention) and the optical flow of the scenes. Scene transitions are allowed when the risk falls below a threshold that is modulated by the users' head movement data (i.e., the faster the head movement, the higher the threshold). Using this algorithm and an experience scenario prepared in advance, visual reality can be manipulated without users noticing (i.e., eye hacking). For example, consider a situation in which objects around the users repeatedly disappear and reappear. Users often feel that something is wrong and sometimes even figure out what has happened, but only later; they cannot visually perceive the changes in real time. Furthermore, with other variants of the risk algorithm, the system can implement a variety of experience scenarios that induce reality confusion.
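As a rough illustration of the gating logic described above, here is a minimal sketch, not the authors' implementation: it assumes a hypothetical risk estimate that weights per-pixel optical-flow magnitude by a Gaussian centered on the gaze point, and a threshold that grows linearly with head speed. The function names and parameters (`sigma`, `base_threshold`, `gain`), and the specific risk formulation, are illustrative assumptions.

```python
import numpy as np

def transition_risk(gaze_xy, flow_magnitude, sigma=0.15):
    """Hypothetical risk estimate: scene motion near the user's point of
    regard is assumed to make a transition more likely to be noticed.

    gaze_xy        -- (x, y) gaze position, normalized to [0, 1]
    flow_magnitude -- 2D array of per-pixel optical-flow magnitudes
    sigma          -- width of the gaze-centered attention window
                      (fraction of image width)
    """
    h, w = flow_magnitude.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy[0] * w, gaze_xy[1] * h
    # Gaussian attention weighting centered on the gaze point
    weight = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * (sigma * w) ** 2))
    # Attention-weighted mean flow magnitude serves as the risk value
    return float((weight * flow_magnitude).sum() / weight.sum())

def allow_transition(risk, head_speed, base_threshold=0.2, gain=0.05):
    """Gate a scene transition: the threshold rises with head speed,
    since faster head movement masks visual changes, so riskier
    transitions can be tolerated."""
    threshold = base_threshold + gain * head_speed
    return risk < threshold
```

In this sketch, the system would call `transition_risk` every frame and trigger a scene swap only when `allow_transition` returns true, exploiting moments when attention and scene motion make the change hard to detect.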
(seamless) The University of Tokyo announces a gaze-tracking-based VR system (an extended version of SR) that automatically replaces the live view of a space with recorded footage of the same space without users noticing.