Augmented reality as an aid in maxillofacial surgery: Validation of a wearable system allowing maxillary repositioning

https://doi.org/10.1016/j.jcms.2014.09.001

Abstract

Aim

We present a newly designed, localiser-free, head-mounted system featuring augmented reality as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation.

Methods

Our head-mounted wearable system for augmented surgery was developed as a stand-alone, video-based, see-through device whose visual features were adapted to maxillofacial bone surgery. We implemented a strategy for presenting augmented reality information to the operating surgeon. The Le Fort I osteotomy was chosen as the test procedure. The system is designed to display the virtual surgical plan overlaid on the real patient. We implemented a method allowing waferless, augmented reality-assisted bone repositioning. In vitro testing was conducted on a physical replica of a human skull, and the augmented reality system was used to perform Le Fort I maxillary repositioning. Surgical accuracy was measured with the aid of an optical navigation system that recorded the coordinates of three reference points (located in anterior, posterior right, and posterior left positions) on the repositioned maxilla. The recorded positions were compared with those expected from the three-dimensional virtual plan. Data were derived using three levels of surgical planning of increasing complexity, and for nine operators with varying levels of surgical skill.
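To make the accuracy metric concrete, the following Python sketch illustrates the kind of comparison described above: for each reference point, the position recorded by the navigation system is compared with the planned position, yielding a per-axis and a three-dimensional (Euclidean) error. The point names, coordinates, and axis convention are invented for the example and are not taken from the study.

    import numpy as np

    # Hypothetical planned (virtual plan) and recorded (navigated) coordinates, in mm,
    # for three maxillary reference points; values are illustrative only.
    planned = {
        "anterior":        np.array([0.0, 62.0, -18.0]),
        "posterior_right": np.array([28.0, 30.0, -20.0]),
        "posterior_left":  np.array([-28.0, 30.0, -20.0]),
    }
    recorded = {
        "anterior":        np.array([0.6, 62.9, -17.2]),
        "posterior_right": np.array([27.1, 31.0, -19.0]),
        "posterior_left":  np.array([-26.9, 31.4, -18.8]),
    }

    for name, plan in planned.items():
        delta = recorded[name] - plan      # per-axis error (e.g. sagittal, frontal, craniocaudal)
        error = np.linalg.norm(delta)      # 3D Euclidean error for this reference point
        print(f"{name}: |error| = {error:.2f} mm, per-axis = {np.round(delta, 2)} mm")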

Results

The mean error was 1.70 ± 0.51 mm. The axial errors were 0.89 ± 0.54 mm on the sagittal axis, 0.60 ± 0.20 mm on the frontal axis, and 1.06 ± 0.40 mm on the craniocaudal axis. The simplest plan was associated with a slightly lower mean error (1.58 ± 0.37 mm) than the more complex plans (medium: 1.82 ± 0.71 mm; difficult: 1.70 ± 0.45 mm). The mean error for the anterior reference point was lower (1.33 ± 0.58 mm) than those for the posterior right (1.72 ± 0.24 mm) and posterior left (2.05 ± 0.47 mm) points. No significant difference in error was observed among operators, despite their varying surgical experience. Feedback from the surgeons was acceptable: all tests were completed within 15 min, and the tool was considered both comfortable and usable in practice.

Conclusion

We used a new localiser-free, head-mounted, wearable, stereoscopic, video see-through display to develop a useful strategy affording surgeons access to augmented reality information. Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, our positive results suggest that it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions.

Introduction

Augmented reality (AR) is an innovative technology that merges data from the real environment with virtual information. The virtual data may be simply informative (such as textual or numerical values relevant to what is under observation) or may consist of three-dimensional virtual objects inserted into the real environment at spatially defined positions.

In the context of image-guided surgery, improvements based on AR may represent the next significant technological development in the field, because such approaches complement and integrate the concepts of surgical navigation based on virtual reality. AR provides the surgeon with a direct perception of how virtual content, generally obtained from medical imaging, is located within the actual scene (Ferrari et al., 2009, Freschi et al., 2009). This is particularly valuable in head-and-neck surgery, in which the extreme anatomical complexity has encouraged the development of several innovative devices. However, the sophistication of such surgery and the longer operative times required have limited the widespread adoption of these devices. Moreover, the necessary equipment is expensive (Hupp, 2013, Turchetti et al., 2010). For these reasons, the technology demands both methodological and economic rationalisation.

In recent years, AR-based tools and applications have been designed and tested in several surgical and medical disciplines, including maxillofacial surgery (Marmulla et al., 2005b, Marmulla et al., 2005a, Mischkowski et al., 2006, Zinser et al., 2013b), dentistry (Bruellmann et al., 2013), ENT surgery (Caversaccio et al., 2008, Nakamoto et al., 2012), neurosurgery (Inoue et al., 2013, Mahvash and Besharati Tabrizi, 2013) and general surgery (Kowalczuk et al., 2012, Azagury et al., 2012, Marzano et al., 2013). The AR view is presented to the user via various technical modalities, such as a traditional display, a tablet display, or a wearable display (Freschi et al., 2009, Mischkowski et al., 2006, Mezzana et al., 2011, Shenai et al., 2011, Gavaghan et al., 2012, Deng et al., 2013, Suenaga et al., 2013). Nevertheless, as is true of many emerging technologies, no standard method for transferring AR technology to clinical practice has yet been established (Dixon et al., 2013).

Bearing these facts in mind, we used a new localiser-free, head-mounted, stereoscopic, video see-through display to develop a useful strategy for delivery of AR information to the surgeon. Our study is the result of collaboration between the EndoCAS Laboratory of the University of Pisa (Italy) and the Maxillofacial Surgery Unit of the S. Orsola-Malpighi University Hospital of Bologna (Italy).

For brevity, the system will be termed the “wearable augmented reality for medicine” (WARM) device. The aim of the present study was to describe our new tool and to validate its accuracy when used as an aid during surgery on the facial bones. We also explore its potential for wider application in maxillofacial surgery.


The WARM device

The device (Fig. 1) is based on a lightweight, widely available stereoscopic head-mounted display (HMD), the Z800 from eMagin (Bellevue, WA, USA). A support placed in front of the HMD holds two USB SXGA cameras (uEye UI-1646LE; IDS, Obersulm, Germany), each with a 1/3″ image sensor, positioned precisely in front of the user's eyes. Optics mounted on each camera ensure an anthropometric field of view. Augmented reality is provided by software that runs on conventional
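To illustrate the general video see-through principle on which such a device relies (this is a minimal sketch under assumed parameters, not the authors' software), the following Python/OpenCV loop captures frames from one head-mounted camera, blends a pre-rendered image of the virtual content onto them, and displays the result. The camera index, the placeholder rendering function, and the blending weight are assumptions made for the example.

    import cv2
    import numpy as np

    def render_virtual_plan(frame_shape):
        # Placeholder for the rendered virtual content (e.g. the planned maxillary
        # position); in a real system this would come from a registered 3D renderer.
        overlay = np.zeros(frame_shape, dtype=np.uint8)
        cv2.circle(overlay, (frame_shape[1] // 2, frame_shape[0] // 2), 60, (0, 255, 0), 2)
        return overlay

    cap = cv2.VideoCapture(0)                # one of the two head-mounted cameras (assumed index)
    while True:
        ok, frame = cap.read()               # real scene, as seen by the camera
        if not ok:
            break
        overlay = render_virtual_plan(frame.shape)
        augmented = cv2.addWeighted(frame, 1.0, overlay, 0.7, 0)   # blend virtual content onto the live view
        cv2.imshow("video see-through (one eye)", augmented)
        if cv2.waitKey(1) & 0xFF == 27:      # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

In the actual stereoscopic device, two such streams (one per camera/eye) would be processed and shown on the corresponding HMD displays.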

Results

The results are shown in Table 1. The mean error was 1.70 ± 0.51 mm. The axial errors were 0.89 ± 0.54 mm on the sagittal axis, 0.60 ± 0.20 mm on the frontal axis, and 1.06 ± 0.40 mm on the craniocaudal axis. The simplest plan was associated with a slightly lower mean error (1.58 ± 0.37 mm) than the more complex plans (medium: 1.82 ± 0.71 mm; difficult: 1.70 ± 0.45 mm). The mean error for the anterior reference point was lower (1.33 ± 0.58 mm) than those for the posterior right (1.72 ± 0.24 mm) and posterior left (2.05 ± 0.47 mm) reference points.
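The summary statistics reported here are means and standard deviations of the per-trial errors, grouped by plan complexity and by reference point. A hypothetical sketch of that aggregation, with invented error values, is shown below.

    import numpy as np

    # Hypothetical per-trial 3D errors (mm), grouped by surgical plan complexity.
    # The values are invented for illustration; they are not the study data.
    errors_by_plan = {
        "easy":   [1.2, 1.5, 1.9, 1.4, 1.8],
        "medium": [1.3, 2.6, 1.7, 2.1, 1.4],
        "hard":   [1.2, 2.2, 1.6, 1.9, 1.6],
    }

    for plan, errs in errors_by_plan.items():
        errs = np.asarray(errs)
        # ddof=1 gives the sample standard deviation, as usually reported as "mean ± SD".
        print(f"{plan}: {errs.mean():.2f} ± {errs.std(ddof=1):.2f} mm")

    # Overall mean ± SD across all plans
    all_errs = np.concatenate([np.asarray(v) for v in errors_by_plan.values()])
    print(f"overall: {all_errs.mean():.2f} ± {all_errs.std(ddof=1):.2f} mm")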

Discussion

In recent years, the discipline of maxillofacial surgery has undergone a remarkable rate of technological innovation. This is because the complex three-dimensional anatomy of the face, together with the need for surgical precision and the increasing number of requests for morphological surgery, has resulted in surgeons demanding advanced technological assistance. Thus, virtual planning software and navigation systems are today widely used by maxillofacial surgeons (Mazzoni et al., 2010, Zinser

Conclusion

We used a new, localiser-free, head-mounted, stereoscopic, video see-through display to develop a useful strategy affording the surgeon access to AR information. Our results suggest that the WARM device would be accurate when used to assist in waferless maxillary repositioning during the Le Fort I orthognathic procedure. Further, our data suggest that the method can be extended to aid the performance of many surgical procedures on the facial skeleton. Also, in vivo testing should be performed to assess surgical accuracy under real clinical conditions.

Conflict of interest statement

This work was partially supported, for EndoCAS, by the OPERA Project (Advanced OPERAting room), funded by Tuscany Regional Funds (PAR FAS 2007–2013, Azione 1.1, P.I.R. 1.1.B).

The authors have no financial interest or personal relationships with other people or organizations that may have inappropriately influenced the work presented here.

References (26)

  • W. Deng et al. Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact Funct Neurosurg (2013).
  • B.J. Dixon et al. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc (2013).
  • V. Ferrari et al. A 3-D mixed-reality system for stereoscopic visualization of medical dataset. IEEE Trans Biomed Eng (2009).