To measure the eye rotation angle during cataract surgery, a detection method based on deep feature matching is proposed, which extracts and matches feature points around the corneal limbus between a preoperative reference image and the intraoperative image to calculate the intraoperative eye rotation angle. The texture features around the limbus are rich but highly similar, and they are prone to obvious changes caused by interference from the surgical process and instruments. To address these problems, a self-supervised local feature extraction and description model is proposed, which combines an attention convolution block (AttConvBlock) with adaptive skip connections. First, AttConvBlock uses coordinate attention to enhance the model's accurate perception of orientation and spatial location information. In addition, AttConvBlock increases model capacity through conditionally parameterized depthwise convolutions, strengthening the model's ability to represent feature information. Furthermore, the adaptive skip connection fuses deep semantic information with shallow structural information, yielding a more discriminative description of feature points. Experimental results on the CATARACT dataset show that the proposed model achieves higher mean matching accuracy than the compared models under every error threshold. Additionally, the mean rotation error of the proposed method is 0.740°, and the detection speed is 36.675 frames per second, meeting the accuracy and real-time requirements for eye rotation angle detection in cataract surgery.
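The paper's model and matching pipeline are not reproduced here, but the final geometric step it describes — recovering the in-plane rotation angle from matched limbal feature points — can be sketched as a least-squares (Kabsch-style) fit. The function name `rotation_angle_deg` and the use of NumPy are illustrative assumptions, not part of the described method:

```python
import numpy as np

def rotation_angle_deg(ref_pts, cur_pts):
    """Estimate the in-plane rotation (degrees) mapping ref_pts onto cur_pts.

    ref_pts, cur_pts: (N, 2) arrays of matched keypoint coordinates, e.g.
    limbal feature matches between the preoperative reference image and an
    intraoperative frame. Centering both point sets first makes the fit
    invariant to translation between the two frames.
    """
    a = ref_pts - ref_pts.mean(axis=0)
    b = cur_pts - cur_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    h = a.T @ b
    u, _, vt = np.linalg.svd(h)
    # Guard against a reflection (det = -1) in the least-squares solution
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, d]) @ u.T
    # Rotation angle of the 2x2 rotation matrix r
    return float(np.degrees(np.arctan2(r[1, 0], r[0, 0])))
```

In practice such a fit would be wrapped in a robust estimator (e.g. RANSAC) to discard outlier matches caused by surgical instruments occluding the limbus, consistent with the interference the abstract mentions.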