Surgical training courses vary in the content they use to teach the necessary surgical skills. This variation in courses and models raises the question of how efficiently each model delivers skills that can then be transferred to the actual clinical setting. Validation of surgical training models is without doubt an important step in identifying, comparing, and ultimately selecting a reliable model whose use in surgical training is justified, ensuring proper acquisition of the necessary skills with minimal loss of resources and minimal risk to patients. In this project, an attempt was made to validate the live pig model for surgical training in the field of reconstructive microsurgery.
Face validity
In this study, the evaluation form responses (n = 27) also supported the live pig model: 100% of respondents, with agreement ranging from moderate to strong, confirmed that it improved their skills, that these skills would be used in the real setting, and that the model is recommended for surgical trainees. This approval underscores the value of the model in the hands-on experience of both the trainee and control groups.
Despite the high approval rate of the flap model, only 50% of respondents agreed that courses using the live pig training model should be recommended to trainees for learning other flaps or general plastic surgery skills. These responses may be explained by variation in trainees’ seniority and operative experience. Early exposure builds confidence and may foster a deeper clinical interest, a better learning experience, and a steeper learning curve [25].
Responses on the timing of introducing the model into the surgical curriculum were divided between the undergraduate stage and early core surgical training. In contrast, most respondents agreed that it should be part of higher surgical training, an opinion warranted by the fact that this type of procedure is usually carried out at higher levels of training in any case, so these skills may be unnecessary to acquire at earlier stages.
Content validity
Most participants agreed that the live pig model is excellent for acquiring and maintaining the skills involved in the LD flap operation and that it adequately represents the tissue and pedicle dissection skills involved in raising the LD flap. Agreement, however, was understandably limited by the anatomical factor: differences in anatomy made the model less acceptable for representing flap marking and design skills.
Publications vary in their consideration of content validity through participants’ responses in regard to skill level: Some include both experts and trainees [
26‐
28], while others only consider the opinions of experts [
18,
29,
30]. In this validation, expert and trainee status was distinguished by questionnaire for only 6 participants. As the total group of participants (n = 27), comprising both experts and trainees, responded in favor of the live pig model in terms of face and content validity, discrimination between levels of expertise was considered unnecessary.
The relatively small number of participants (n = 6) who were fully assessed at the “free flap dissection workshop” in Timisoara was strengthened by adding data, collected with the same face and content validity assessment tools, from the “Advanced Hands on Course in Microsurgical Breast Reconstruction” in Athens. Combining the results strengthened the validation and helped eliminate workshop-related confounders from the validation of the live pig training model. Overall, the model showed promising face and content validity.
Construct validity
Despite the small sample size of this study, the model allowed differentiation of competence between the expert and trainee groups, as measured by the procedure-specific checklist, flap physiological outcome (scratch test and fluorescence imaging), and objective hand motion analysis. However, this discrimination did not reach statistical significance for all parameters measured.
The procedure-specific checklist and global rating scale were developed by the expert panel in this study and are likely to prove most valuable as training feedback tools. However, neither the checklist nor the global rating scale had been subjected to validation research before their use in this procedure assessment. Many surgical assessment tools have been designed and introduced in the field of reconstructive microsurgery for the same procedure by different research groups [
31]. The standardization of surgical assessment tools by means of comparative performance and consensus methodologies [
32] may help limit variables when it comes to data collection, thus allowing an accurate compilation of skill acquisition data for future validation of training models.
Physiological flap perfusion parameters were used to evaluate performance between the two groups. Various factors may affect viability outcome in any flap surgery. The general condition of the animal and tissue handling during the procedure play an important part in the surgical outcome. Despite the high level of success of modern free flap surgery, reaching up to 95% success rate [
33], free flap failure continues to be a serious complication that should be diagnosed and addressed early, both intraoperatively and postoperatively. Of the two methods used to assess flap viability, fluorescence imaging provided direct visualization of flap perfusion and is therefore more objective than the scratch test. Although of limited statistical value, this assessment method provided important feedback on individual and team performance.
Hand motion analysis (HMA) is an objective assessment tool for surgical skill that tracks the surgeon’s hand movements during a standardized task and uses measures derived from this tracking to assess competence and acquisition of microsurgical skill [
34,
35].
In this study, there was a notable difference between the expert and trainee groups on the pre-test attempt in all three HMA parameters, namely time, number of hand movements, and path length. Although only average time reached statistical significance, this objective discrimination shows promise for establishing the training model’s construct validity in future research.
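As an illustration, the three HMA parameters above (time, number of hand movements, and path length) can be derived from raw sensor tracks roughly as sketched below. This is a minimal, hypothetical implementation: the sampling rate and the movement-speed threshold are illustrative assumptions, not values reported in this study, and commercial HMA software may define its movement count differently.

```python
import numpy as np

def hma_metrics(positions, sample_rate_hz=20.0, move_speed_mm_s=50.0):
    """Illustrative HMA summary measures from tracked sensor positions.

    positions: (N, 3) array of sensor coordinates in mm, sampled at a
    fixed rate. The 50 mm/s movement threshold is a hypothetical value
    chosen for illustration only.
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)            # per-sample displacement
    step_len = np.linalg.norm(steps, axis=1)      # mm moved per sample
    path_length = step_len.sum()                  # total distance travelled
    total_time = len(positions) / sample_rate_hz  # task duration in seconds
    speed = step_len * sample_rate_hz             # instantaneous speed, mm/s
    # Count one "movement" each time speed rises above the threshold.
    moving = speed > move_speed_mm_s
    n_movements = int(np.count_nonzero(moving[1:] & ~moving[:-1]) + moving[0])
    return {"time_s": total_time,
            "path_length_mm": path_length,
            "movements": n_movements}
```

Under this sketch, a shorter path length and fewer discrete movements for the same task would be read as greater economy of motion, which is the usual rationale for HMA-based discrimination between experts and trainees.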
There is no consensus on the ideal placement for the digital sensors for HMA [
36], and standardization is likely to provide more consistency. There was some limited electromagnetic interference with the HMA software when the transmitter was in close proximity to the electrocautery device.
Simultaneous filming of the procedures allows real-time or subsequent expert assessment with a rating score (PSRS). In this instance, the expert rater was present at the workshop, so ratings were not fully blinded. Nevertheless, the whole-procedure simulation provided by this model offers an extremely rich opportunity for skill assessment. Procedure length, instrument handling, tissue handling, and pedicle handling all provide a spectrum of competencies that a blinded assessor can readily discern, despite the potential for bias because the audio and video footage might make it easy to distinguish a trainee from an expert.
The trainee group’s eagerness to improve their performance both established a promising level of construct validity for the model and exposed a relative fall in performance in the expert control group. This could be explained by an element of overconfidence: the expert group comprised experienced specialists who had performed the procedure numerous times in the clinical setting, and any lack of interest in a perfect post-test attempt is all the more striking after a flawless pre-test. Indeed, surgical training workshops also offer a platform for altering surgeons’ confidence and attitudes; through training, overconfidence can be reduced [
37,
38].