
“Good, Better, How” Educational Intervention: Potential Benefits of Utilizing Feedback in General Surgery; Sequential Mixed-Methods Study of an Educational Intervention


Received 15 July 2024

Accepted for publication 11 February 2025

Published 12 March 2025 Volume 2025:16 Pages 381–398

DOI https://doi.org/10.2147/AMEP.S487038


Editor who approved publication: Dr Md Anwarul Azim Majumder



Mark Guadagnoli,1 Whitney Elks,2 Kencie Ely,2 Abigail W Cheng,2 Kavita Batra,3 Charles Randolph St Hill2

1Department of Medicine, Kirk Kerkorian School of Medicine at UNLV, Las Vegas, NV, USA; 2Department of Surgery, Kirk Kerkorian School of Medicine at UNLV, Las Vegas, NV, USA; 3Office of Research, Department of Medical Education, Kirk Kerkorian School of Medicine at UNLV, Las Vegas, NV, USA

Correspondence: Mark Guadagnoli, Department of Medicine, Kirk Kerkorian School of Medicine at UNLV, 1701 West Charleston Blvd, Suite 550, Las Vegas, NV, 89102, USA, Tel +1 702 895 4624, Email [email protected]

Background: Feedback is acknowledged as a necessity for effective learning and performance improvement. However, it has been shown to have variable effects on subsequent performance. This study introduces the “Good, Better, How” (GBH) framework for providing and receiving effective feedback in surgical training.
Methods: Surgery residents, fellows and faculty at a single institution completed pre- and post-educational intervention surveys, attended a GBH educational intervention, and participated in focus groups. Survey results were analyzed quantitatively and qualitatively.
Results: Survey analysis showed significant (p<0.05) positive changes after the GBH educational intervention, and participants rated the GBH method very favorably (mean score: 8.03/10), suggesting a positive paradigm shift from previous feedback methods used in surgical education. Dominant focus group themes included phrases such as “positive culture”, “systematic”, and “useful”.
Conclusion: Despite implementation challenges, the GBH feedback system shows promise for enhancing surgical education and may contribute to improved patient outcomes.

Keywords: feedback, medical education, GME, surgery

Introduction

Feedback is an invaluable tool for learners, serving as a critical mechanism for tracking progress and enhancing development throughout their training journeys. Feedback’s role in learning has been well documented in various studies.1,2 However, the challenge lies not in recognizing the need for feedback but in optimizing its delivery.

In recent years, the landscape of feedback systems has evolved significantly, and their effectiveness has been scrutinized across various educational and professional settings.3–6 Methods such as “Ask-Tell-Ask” and “What, Why, How” have been suggested and utilized in the clinical setting; however, they exclude operative assessment, which is a core component of surgical training.4–6 Evaluating these systems requires a keen eye on several factors: how easily they can be implemented, the consistency among different evaluators, the effectiveness of the feedback, and perhaps most importantly, how the feedback is received by the trainees.

It is crucial that feedback systems are meticulously vetted and accompanied by thorough training for both instructors and learners. With this in mind, we suggest a feedback method known as “Good, Better, How” (GBH). GBH is a structured approach designed to maximize clarity and effectiveness and to improve the relationship between the giver and receiver of feedback. Trainees frequently express frustration over feedback that is too vague, too negative, or overly positive, offering little room for meaningful improvement.7 Spontaneous, unstructured feedback often fails to give learners the information they need to make tangible improvements. By structuring feedback to include what was done well (Good), what could be improved (Better), and practical steps for how to improve (How), the GBH method encourages feedback that is comprehensive, actionable, specific, and constructive, addressing trainees’ need for clarity and direction.

The GBH method, initially developed in high-performance sports, aims to deliver performance feedback that combines positive reinforcement, constructive criticism, and practical guidance.8 The method consists of three components: “Good”, which acknowledges strengths and accomplishments; “Better”, which identifies areas needing improvement; and “How”, which provides practical strategies for growth. This approach can be tailored to various teaching environments, depending on the learner’s familiarity with the subject matter and the context.9
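To illustrate how the three components fit together, the sketch below models a single GBH feedback entry as a simple structured record. This is purely illustrative: the field names and example content are hypothetical and are not part of the study instrument.

```python
# Purely illustrative: a structured record for one GBH feedback entry.
# Field names and example content are hypothetical, not the study instrument.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GBHFeedback:
    good: str                                      # strength or accomplishment observed
    better: str                                    # specific behavior that could improve
    how: List[str] = field(default_factory=list)   # concrete steps toward improvement

# Hypothetical entry after a laparoscopic case
entry = GBHFeedback(
    good="Maintained excellent camera orientation throughout the dissection.",
    better="Port placement crowded the working instruments.",
    how=[
        "Review port triangulation before the next case.",
        "Mark planned port sites with the attending during setup.",
    ],
)

print(f"Good: {entry.good}")
print(f"Better: {entry.better}")
print("How: " + "; ".join(entry.how))
```

Capturing all three fields for every interaction mirrors the method’s emphasis on pairing reinforcement with a concrete improvement plan.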

While the GBH method is straightforward once learners and instructors are familiar with it, the key to its successful implementation lies in thorough training and educational interventions. These interventions not only ensure everyone is on the same page but also foster an environment where educators can exchange feedback strategies and practice delivering effective feedback.6,10 Additionally, these sessions help learners understand what to expect from feedback interactions and open a dialogue about best practices and expectations.

Ultimately, this structured approach to feedback, exemplified by the GBH method, serves as a model for enhancing the feedback process in educational settings. It underscores the importance of specific, actionable feedback and highlights the benefits of comprehensive training for both instructors and learners. As such, this study serves as a model for educational interventions that can provide guidance to educators and learners for how to best implement GBH into their daily practice.

Methods

Ethical Considerations

This study was approved by the Kirk Kerkorian School of Medicine at UNLV institutional review board (IRB# 2023–121) on May 2, 2023. The survey component of this study remained anonymous, and participants were allowed to leave the survey at any time. Informed consent, including consent to publication of anonymized responses and direct quotations, was provided on the first page of the survey, and participants could either complete the survey or leave the study. Each participant was able to submit only one survey response. Participants were made aware that the focus groups were recorded and later transcribed. No participant names were used in the transcriptions.

Design

This was a mixed-methods study including surgery residents, fellows and faculty at a single institution. There were two groups: 1st- through 4th-year residents were categorized as “learners”, while 5th-year residents, fellows and faculty were categorized as “instructors”. Each group completed a pre-educational intervention survey, attended a two-hour instructional educational intervention on the GBH method, and then completed a post-educational intervention survey followed by a 20-minute focus group. Participants were excluded from analysis if they were unable to complete the surveys. Participants were recruited via the department of surgery email listserv as well as through announcements during weekly general surgery conferences.

The survey instrument included 27 pre-educational intervention questions, 20 immediate post-educational intervention questions and 22 delayed (1-month) post-educational intervention questions. The survey was developed to assess the study participants’ perceptions of giving and receiving feedback. The survey tools are available in Survey Instruments 1 and 2. The survey instrument validation process is demonstrated in Figure 1. After the initial pool of questions was developed, the survey was reviewed by 3 subject matter experts (SMEs) in the field of medical education at our institution including one survey validation expert, one internal medicine attending and one surgery attending. Feedback from the SMEs was used to modify the survey questions (Phase I). After Phase I of the validation process, the survey instrument was then reviewed by 3 previous acute care surgery fellows at our institution who were not study participants (Phase II). The survey was created electronically using Qualtrics software and subsequently administered anonymously to the study participants using a QR code at the beginning and end of the educational intervention, and again 1-month post-educational intervention at the beginning of a regularly-scheduled surgical conference. No direct participant identifiers were collected. The survey consisted of questions regarding participant characteristics, pre-educational intervention perceptions of giving and receiving feedback, and post-educational intervention perceptions of giving and receiving feedback. The pre- and immediately post-educational intervention assessments remained matched as the participants were asked to simply “pause” their survey and return after the completion of the educational intervention to complete their responses.

Figure 1 Survey validation flow chart.

The study participants were divided into two groups: the “instructors” and the “learners”. The groups attended educational interventions on separate days. The instructor group included all faculty, fellows and chief residents. The learner group consisted of all 1st-, 2nd-, 3rd- and 4th-year general surgery residents. The instructor session was held first. The instructor focus groups were divided into 2 separate groups to promote discussion among similar training levels; Group 1 Instructors consisted of faculty, while Group 2 Instructors consisted of fellows and chief residents. This all took place within a single session, approximately 2 hours long. Following the instructor educational intervention, the learner educational intervention was held. The learners completed a very similar session; however, this session was partially facilitated by the instructor group. The learners were likewise divided into 2 focus groups to promote discussion among similar training levels: Group 1 Learners consisted of 1st- and 2nd-year general surgery residents, while Group 2 Learners consisted of 3rd- and 4th-year general surgery residents.

The focus groups were held immediately after completion of the post-educational intervention surveys. The discussions were regarding thoughts and opinions of the GBH method of feedback and were guided by the authors of this study (W.E., A.C., C.S., and M.G.) with open-ended questions to prompt conversation. The primary prompting questions were as follows: (1) Please list 3 things that can make feedback more effective in surgical training. Write them down and we will discuss. (2) Please list 3 barriers to giving effective daily feedback in surgical training. Write them down and we will discuss. (3) How do you think implementing the GBH method of providing feedback will change surgical training? (4) How do you think implementing GBH will change barriers to giving and receiving feedback in surgical training? These sessions were recorded on an audio recorder and later transcribed for qualitative analysis.

Analysis

Quantitative

First, univariate tests were conducted to describe the data. Categorical variables were represented as frequencies or percentages, whereas continuous variables were presented as means and standard deviations. Pre- and post-intervention mean scores of survey items were compared using a paired t-test, while categorical outcomes were compared using related-samples marginal homogeneity tests. Mean differences in the intention of “implementing the GBH method in future” among instructors and learners were calculated by independent-samples t-test. A two-way mixed ANOVA model was also used to establish whether there was an interaction between the between-subjects factor (instructor vs learner) and the within-subjects factor (time point, pre vs post) on mean scores of the survey items. A follow-up analysis at one month was also conducted, in which paired observations were not assumed given the lack of linkage between the previous phase of the survey and the one-month follow-up. In the one-month analysis, an independent-samples t-test was used to compare instructors and learners. The significance level was set at 5%, and the normal approximation to the binomial distribution was used to calculate 95% confidence intervals of proportions in the univariate analyses. All analyses were conducted using SPSS version 28.
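The analyses were performed in SPSS version 28. As a minimal sketch of the same families of tests, the Python example below assumes matched pre/post Likert scores, an instructor/learner grouping variable, and a square pre-by-post table of self-rated skill categories; all variable names and toy values are hypothetical rather than study data.

```python
# Minimal sketch of the quantitative comparisons described above (the study
# used SPSS v28; this re-expresses the same kinds of tests with toy data).
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import SquareTable
from statsmodels.stats.proportion import proportion_confint

# Hypothetical matched pre/post satisfaction scores (1-5 Likert) per participant
pre = np.array([3, 4, 3, 2, 4, 3, 3, 4])
post = np.array([4, 4, 4, 3, 5, 4, 3, 5])
t_stat, p_val = stats.ttest_rel(pre, post)            # paired t-test, pre vs post
print(f"paired t-test: t={t_stat:.2f}, p={p_val:.3f}")

# Hypothetical instructor vs learner scores on one post-survey item
instructors = np.array([4, 5, 4, 3, 4])
learners = np.array([3, 4, 4, 4, 5])
print(stats.ttest_ind(instructors, learners))          # independent-samples t-test

# Related-samples marginal homogeneity (Stuart-Maxwell) on a square
# pre x post table of self-rated skill categories (novice/competent/proficient)
skill_table = np.array([[2, 3, 1],
                        [0, 4, 5],
                        [0, 1, 6]])
print(SquareTable(skill_table).homogeneity())

# 95% CI for a proportion via the normal approximation (e.g., 32/44 "satisfied")
print(proportion_confint(count=32, nobs=44, method="normal"))
```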

Qualitative

For analysis of qualitative data, this study followed grounded theory, an inductive theoretical framework that systematically analyzes qualitative data by generating theories from patterns that arise during focus groups.11,12 Qualitative analysis of the focus group discussions was completed by identifying emerging themes along with their corresponding subthemes. Transcriptions were coded by a single author (K.E.). All codes were then cleaned and consolidated collaboratively by W.E. and K.E. Microsoft Excel was used to manage the data. Using this inductive framework, themes and subthemes were initially identified by K.E. and W.E. directly from the data. Representative quotations for each theme and subtheme were identified.13
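As an illustration of this coding workflow (which the authors carried out collaboratively in Microsoft Excel), the sketch below shows how consolidated codes could be mapped to higher-level themes and summarized with representative quotations. The codes and quotes are hypothetical placeholders, not study data.

```python
# Illustrative sketch of consolidating focus-group codes into themes/subthemes.
# The excerpts, codes, and quotes below are hypothetical placeholders.
import pandas as pd

excerpts = pd.DataFrame([
    {"group": "Group 1 Instructors", "code": "positive culture",
     "quote": "It keeps the conversation constructive."},
    {"group": "Group 2 Learners", "code": "systematic/specific/useful",
     "quote": "I know exactly what to work on next time."},
    {"group": "Group 2 Instructors", "code": "time consuming",
     "quote": "Hard to fit in on a busy service."},
])

# Map consolidated codes to themes (inductively derived in the study)
code_to_theme = {
    "positive culture": "Benefits",
    "systematic/specific/useful": "Benefits",
    "time consuming": "Challenges",
}
excerpts["theme"] = excerpts["code"].map(code_to_theme)

# Count excerpts per theme/subtheme and keep one representative quote for each
summary = (excerpts.groupby(["theme", "code"])
           .agg(n_excerpts=("quote", "size"),
                representative_quote=("quote", "first"))
           .reset_index())
print(summary)
```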

Results

Sample Characteristics

A total of 57 subjects were present for the study educational interventions. Twenty-six general surgery residents, 7 SCC/ACS fellows and 23 surgery faculty participated in 1 of 2 sessions on August 8, 2023 and August 22, 2023. Forty-four participants completed the survey, representing 77.2% overall participation; 13 subjects did not complete the survey. At the study institution, the general surgery residents had 96.4% participation (n = 27), with 1 resident unable to attend due to emergent clinical duties. The SCC/ACS fellows had 100% participation (n = 7). The surgery faculty had 79.3% participation (n = 23), with 6 faculty missing due to clinical duties. Of the faculty who completed the survey, 13 (31%) had been attendings for 0–5 years, 2 (4.8%) for 6–10 years, 3 (7.1%) for 11–15 years, and 4 (9.5%) for 21+ years. Of the residents who completed the survey, 8 (22.2%) were at the PGY-1 level, 4 (11.1%) at PGY-2, 1 (2.9%) at PGY-3, 1 (2.9%) at PGY-4 and 4 (9.5%) at PGY-5. Twenty-eight (63.6%) participants were male, 12 (27.3%) were female and 4 (9.1%) chose not to disclose gender. No participants were Hispanic/Latinx, 36 (81.8%) were non-Hispanic/Latinx and 8 (18.2%) chose not to disclose ethnicity. Thirty-one (70.5%) participants reported white race, 2 (4.6%) were Black or African American, 6 (13.5%) were Asian, and 5 (11.4%) chose not to disclose race. Prior to this study, 15 (34.1%) subjects had been exposed to the GBH method of feedback, while the other 29 (65.9%) had not (Table 1).

Table 1 Characteristics of the Sample (N=44)

Skill Level

With regard to the pre/post comparison of skill level, results of the related-samples marginal homogeneity test indicated a significant increase in the proportion of individuals reporting a proficient skill level (40.9% vs 72.7%, p=0.011, Figure 2). In the 1-month post-intervention survey, there was an increase in the proportion of individuals reporting an expert skill level (Figure 3). Further 1-month post-GBH results may be found in Tables 2 and 3.

Table 2 Comparing Mean Scores of Satisfactions, Importance, and Agreement Levels Among Instructors and Learners in a Post-One- month Survey (N=37)

Table 3 Comparing Instructors and Learners’ Responses in a Post-One-month Survey (N=37)

Figure 2 Comparing skill levels at pre and post intervention.

Figure 3 Comparing skill levels at pre, immediately post, and 1-month post-educational intervention.

Satisfaction

There was a statistically significant increase in the proportion of respondents who reported being “very satisfied” with the amount of instruction given on how to provide feedback (11.4% vs 38.6%, p<0.001, Figure 4). Also, as indicated in Figure 5, there was a statistically significant increase in the proportion of respondents who reported being “satisfied or very satisfied [combined]” with the amount of feedback received (36.3% vs 54.5%, p=0.02). There was no statistically significant difference in the proportion of respondents who reported being satisfied with the feedback they provided (p>0.05, Figure 6).

Figure 4 Comparing satisfaction levels of instruction on how to provide feedback at pre and post intervention.

Figure 5 Comparing satisfaction level with the feedback received at pre and post intervention.

Figure 6 Comparing “How satisfied are you with feedback you provided?” at pre and post intervention.

Upon comparing the mean scores of items related to satisfaction, the mean score of satisfaction with the amount of instruction on how to provide feedback increased post-GBH, with a statistically significant mean difference (3.42±0.932 vs 4.16±0.843, p<0.001, Table 4). Similarly, the mean score of satisfaction with the amount of feedback received increased post-GBH, with a statistically significant mean difference (3.18±0.947 vs 3.57±0.925, p=0.008, Table 4). There were no statistically significant differences in the pre/post mean scores of satisfaction with the amount of feedback provided from the instructors’ perspective (Table 4). To assess whether there was an interaction between the between-subjects factor (instructor vs learner) and the within-subjects factor (time point, pre vs post) on mean scores of the survey items, a two-way mixed ANOVA was used. The main effect of time showed a statistically significant difference in mean satisfaction with the amount of instruction provided on how to provide feedback across time points (pre and post), F(1, 41) = 19.18, p < 0.001, partial η2 = 0.319 (Table 5). However, the main effect of group showed no statistically significant difference in mean satisfaction with the amount of instruction provided on how to provide feedback between groups (instructor and learner), F(1, 41) = 3.00, p = 0.09, partial η2 = 0.068 (Table 5). The main effect of time (pre vs post) showed no statistically significant difference in mean satisfaction with the amount of feedback provided, F(1, 42) = 1.76, p = 0.19, partial η2 = 0.040, and the main effect of group (instructor vs learner) likewise showed no statistically significant difference, F(1, 42) = 0.75, p = 0.39, partial η2 = 0.018 (Table 5). The main effect of time showed a statistically significant difference in mean satisfaction with the amount of feedback received across time points (pre and post), F(1, 42) = 5.09, p = 0.03, partial η2 = 0.108 (Table 5), whereas the main effect of group showed no statistically significant difference between groups (instructor and learner), F(1, 42) = 0.33, p = 0.57, partial η2 = 0.008 (Table 5).
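The two-way mixed ANOVA above was run in SPSS; a roughly equivalent analysis in Python could use the pingouin package’s mixed_anova function, assuming the data are reshaped to long format (one row per participant per time point). The data frame below is a hypothetical toy example, not study data.

```python
# Hedged sketch of a two-way mixed ANOVA (between: group; within: time) using
# pingouin; the study itself used SPSS v28. Data here are hypothetical.
import pandas as pd
import pingouin as pg

long_df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["instructor"] * 6 + ["learner"] * 6,
    "time":    ["pre", "post"] * 6,
    "satisfaction": [3, 4, 4, 4, 2, 4, 3, 5, 3, 3, 4, 5],
})

# Between-subjects factor: group; within-subjects factor: time;
# effect sizes reported as partial eta-squared (np2), as in the paper.
aov = pg.mixed_anova(data=long_df, dv="satisfaction", within="time",
                     subject="subject", between="group", effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])
```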

Table 4 Comparing Mean Scores of Satisfaction Construct (N=44)

Table 5 Comparing Between-Subjects’ and Within-Subjects’ Factors for the Satisfaction Construct (N=44)

Actionable Goals

As shown through the pre/post comparisons (Supplementary Table 1), the mean score of the possibility of creating actionable learning goals with the feedback provider increased post-GBH, with a statistically significant mean difference (3.34±0.75 vs 3.77±0.80, p=0.002). The main effect of time showed a statistically significant difference in mean scores of the item “When you have been provided feedback in the past, you created actionable learning goals with the feedback provider during the interaction” across time points (pre and post), F(1, 42) = 13.00, p = 0.001 (Table 6).

Table 6 Comparing Between-Subjects’ and Within-Subjects’ Factors for the Other Survey Items Used in the GBH Method (N=44)

Feedback and Implementation of the GBH Method

Upon comparing the mean scores of how important receiving feedback is in surgical training, no statistically significant differences were found (p>0.05, Table 7). Likewise, the difference in mean scores for whether the feedback received was overall encouraging was not statistically significant (Table 7). There was no statistically significant difference in the mean importance of receiving feedback in surgical training across time points (pre and post), F(1, 42) = 1.13, p = 0.29, partial η2 = 0.026, or between groups (instructor and learner), F(1, 42) = 0.45, p = 0.51, partial η2 = 0.011 (Table 8). The main effect of time showed no statistically significant difference in mean score on whether the overall feedback was encouraging across time points (pre and post), F(1, 42) = 2.58, p = 0.12, partial η2 = 0.058, and the main effect of group showed no statistically significant difference between groups (instructor and learner), F(1, 42) = 0.70, p = 0.41, partial η2 = 0.016 (Table 8). No statistically significant main effects of group or time were found for the remaining survey items (Table 6). On the question asking participants to rate the GBH method on a scale of 0–10, only n=34 (77.3%) participants responded, and the overall mean was 8.03±2.209. Appendices A.1–A.11, containing tables of non-significant findings, can be found in the Supplementary Materials.

Table 7 Comparing Mean Scores of How Important Is Receiving Feedback in Surgical Training and if Overall Feedback Is Encouraging

Table 8 With-in Subjects and Between Subjects’ Differences on the Importance of Receiving Feedback in Surgical Training and if the Feedback Is Overall Encouraging (N=44)

Focus groups lasted approximately 20 minutes and were divided into groups of participants at similar levels of surgical training. After all transcripts were coded, the codebook contained 166 unique excerpts, of which 68 originated from Group 1 Instructors, 36 from Group 2 Instructors, 21 from Group 1 Learners, and 41 from Group 2 Learners. A total of 5 major themes and 38 subthemes emerged from the data, describing the perceptions of feedback. Themes encompassed Benefits, Challenges, Requirements, Challenges with Traditional Feedback, and Educational Intervention Feedback. Themes were not assigned a rank order of importance. Representative quotes were gathered from the 1-month follow-up focus group session. Table 9 presents the dominant themes and subthemes that were discussed. Refer to the Supplementary Materials for the complete list of themes and subthemes.

Table 9 Major Themes and Subthemes With Representative Quotations From Focus Groups

Themes

Theme 1 was labeled “Benefits”, and included the subthemes of engaging conversation, personal responsibility/accountability/commitment, positive culture, culture of improvement, sets expectations/predictable, and systematic/ specific/useful. Theme 2 was labeled “Challenges” and included the subthemes of feedback fatigue, insight incongruity, learning curve, resistance to change, and time consuming. Theme 3 was labeled “Requirements” and included the subthemes of buy in, requires discipline, and requires mutual respect. Theme 4 was labeled “Challenges with Traditional Feedback” and included the subthemes of inconsistency and vague. Theme 5 was labeled “Educational Intervention Feedback” and included the subthemes of scenario incompleteness and survey length. The most dominant themes discussed in focus groups were “Positive Culture” and “Systematic, Specific, Useful”.

Discussion

The current study explored the perceptions of feedback among surgical trainers and trainees and proposed a solution through the implementation of the “Good, Better, How” (GBH) feedback method. This method is adaptable, intuitive, and simple, making it easy for both instructors and learners to understand and adopt. Its attributes align well with the values ascribed to contemporary learners, such as Millennials and Generation Z. When used correctly, the GBH method:

  • Encourages positive behaviors, both current and new;
  • Discourages negative behaviors;
  • Focuses on growth and forward progress;
  • Is given without judgment.

The GBH method incorporates elements of previously published feedback approaches that direct attention to objective behavioral actions and reflection on improvements. It has since evolved and been adapted into three semi-distinct levels: teacher-oriented GBH, team-oriented GBH, and learner-oriented GBH.9 In this study, participants received clear instructions on applying the GBH method in surgical training through a workshop.

Significant Results and Focus Group Themes

Perceptions on Giving Feedback

The study highlighted several key findings regarding feedback perceptions. Nearly 100% of surgical trainees are contemporary learners.14 Older generations have, in the past, labeled contemporary learners as narcissistic, having short attention spans, being glued to their phones, and expecting daily praise while lacking an understanding of roles and boundaries.15,16 On the contrary, contemporary learners are innovative, technologically savvy, and prioritize quality over quantity. They are also the most diverse generation in the country.17 Contemporary learners are described as confident, extremely reliant on technology, ambitious, multitasking, achievement-oriented, and valuing work-life balance. They have also shown a preference for nontraditional teaching methods, a desire for constant rewards, and adoption of technology at the highest levels.18

Generational differences between educators and learners are evident. Ebeling et al16 suggest that surgical attendings should “go into their space, meet them where they are”, implying that faculty or senior residents should communicate according to the learner’s preferences, which may include digital messaging and social media platforms. Educators must embrace the strengths of their learners to develop a better framework for encouraging motivation and learning. The GBH feedback method can be a potent tool for this when used correctly.

Faculty and residents are both significant sources of education and feedback. In a study by De et al19 almost all faculty and residents wanted medical students on the service (>95%), and faculty believed residents did a better job teaching than either the students or residents themselves did (p < 0.001). Additionally, students considered residents as the primary source of education in patient care. This highlights the importance of teaching both faculty and residents to utilize the GBH method to provide feedback, offering a multi-tiered approach to globally improving the educational experience in surgery and ultimately enhancing patient care.

Perceptions on Receiving Feedback

The study also delved into the trainees’ perceptions of receiving feedback. Surgical trainees expressed a preference for attendings to act as coaches rather than just bosses. Ebeling et al16 drew lessons from the military, emphasizing the need to recognize ambition, set expectations, communicate on their level, and give them room to innovate.

Surgical training historically followed the Socratic method, where knowledge was transferred from mentor to student via dialogue. With the change of cultural climate, this method can be seen as “too confrontational” or “bullying”.20 Contemporary learners learn differently, being intrinsically motivated and favoring more autonomy. They prefer short, frequent, non-offensive, and structured feedback.18 Contemporary learners are more focused on the quality of their work rather than the hours spent, desire close relationships with authority figures, and appreciate flexible structures and teamwork.21 These characteristics underscore the need for a feedback approach like GBH, which is quick, structured, and encourages active participation and autonomy.

Medical students often desired more hours of instruction, believed they performed fewer procedures per week, and considered the feedback poor compared to the opinions of faculty and residents (p < 0.002). Nearly 50% of medical students felt they were an inconvenience to the service; 30% of house officers and 27% of faculty shared this sentiment.19

Effectiveness of the GBH Workshop in Surgery

Given the new era of easy access to information through technology, educators are no longer the sole sources of education. Traditional educators now have a greater role in facilitating and guiding contemporary learners through this abundance of information. Feedback has proven to be an effective strategy to accomplish this, but educators sometimes need guidance on how to give feedback successfully.22 Our study demonstrated that the GBH workshop was effective in surgery by encouraging shared discussions on ways to improve, rather than taking a Socratic approach. The GBH method promotes active participation and autonomy by providing responsibility, which has been shown to stimulate learning and motivation in contemporary learners.18 Additionally, this method is efficient and can be executed within minutes, even on busy surgical services.

Providing feedback is a skill that can be improved with practice, and workshops have been shown to enhance faculty feedback skills.23,24 Institutional leaders have a unique opportunity to change the culture and emphasize that giving frequent, high-quality feedback should be a priority by continuing to schedule workshops.6 This study serves as a model for workshops that can provide guidance on giving effective feedback using the GBH method in surgical training. Continuing to schedule workshops annually as new interns join the program can help shape the culture of surgery to fit the educational needs of the younger generations. Future research should follow up on the cohort in this study to evaluate the impact of implementing the GBH method of feedback at this institution.

Strengths and Limitations

The study has several notable strengths. First, it provides a clear and focused exploration of participants’ perceptions of feedback, effectively capturing shifts in attitudes toward giving and receiving feedback over time through a pretest-intervention-posttest design. This approach is valuable for understanding how individuals’ perceptions of feedback evolve, which is an essential foundation for potential future behavior change. Additionally, the use of a mixed-methods approach that integrates both quantitative and qualitative data strengthens the study, offering a comprehensive understanding of the phenomenon. The qualitative data enriches the numerical findings by providing deeper insights into participants’ experiences and their understanding of feedback.

However, there are some limitations to consider. While the study effectively tracks changes in perceptions, it does not directly assess how these changes translate into future behavior, such as whether participants will implement feedback practices in a timely or non-judgmental manner. This limits the ability to draw clear connections between perceptual changes and actual behavior. Furthermore, the pretest-intervention-posttest design is cross-sectional, which makes it difficult to determine whether the shifts in perceptions observed are sustained over time or simply temporary. A more longitudinal approach would help to address this issue. Additionally, the study does not track participants’ actual feedback behaviors post-intervention, which would provide a more complete picture of the intervention’s impact.

Incorporating a longitudinal design would allow for the tracking of long-term behavior changes, offering a clearer understanding of whether changes in perceptions persist over time and influence real-world actions. Additionally, utilizing randomized controlled trials (RCTs) could directly assess the effect of various feedback interventions on actual behavior, providing stronger evidence of their impact. Expanding the focus to include behavioral outcomes, such as self-reported feedback practices or peer evaluations, would help link changes in perceptions to tangible actions. Finally, future studies could benefit from exploring contextual factors that influence feedback practices, such as organizational culture or peer dynamics, to tailor interventions for more effective behavior change across different settings.

Another limitation of the study is the potential for self-reported data bias, which is common in studies relying on participants’ subjective perceptions. Participants may have been inclined to provide socially desirable responses, particularly when asked about their satisfaction with or attitudes toward feedback practices, leading to overestimations of their satisfaction or openness. While the study aimed to capture participants’ genuine perceptions, the self-reported data may not fully reflect their actual behaviors or experiences. Future studies could be planned to reduce social desirability bias by using indirect questioning techniques or validated scales that account for response biases. Additionally, combining self-reported data with objective measures, such as direct observations or peer evaluations, would provide a more accurate representation of participants’ actual feedback practices. Follow-up surveys or interviews could also be incorporated to assess whether changes in perceptions are sustained over time, offering a clearer connection between perceived attitudes and real-world behavior. Lastly, we acknowledge that social desirability bias may be present in this study, given the nature of the study design.

Conclusions

The current study investigated the efficacy of a feedback method known as “Good, Better, How” via an educational intervention. In both the qualitative and quantitative data, instructors and learners rated the method favorably and felt that it represented a positive paradigm shift from previous/traditional feedback methods used in surgical education. Like most methods, there are considerations and limitations to implementing a new feedback system. However, considering the positive results shown in this study, the GBH feedback system appears to be well received and has the potential to significantly improve medical education and to contribute to improved patient outcomes.

Acknowledgments

We would like to convey our gratitude to the subject matter experts who offered their valuable time and effort to provide suggestions during the survey validation process. We would also like to thank the team at UNLV, Jenny Shin and Nicholas Mastroluca, for their contributions to the media involved in our study.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Guadagnoli MA, Holcomb WR, Weber TJ. The influence of knowledge of results on practice scheduling effects. Human Movement Sci. 2001;20(3):201–213.

2. Bailey R, Fletcher M, Anderson C. Effective feedback: principles and practice. Medical Education. 2011;45(6):590–598.

3. Davis J, Roach C, Elliott C, Mardis M, Justice EM, Riesenberg LA. Feedback and assessment tools for handoffs: a systematic review. J Grad Med Educ. 2017;9(1):18–32. doi:10.4300/JGME-D-16-00168.1

4. French JC, Colbert CY, Pien LC, Dannefer EF, Taylor CA. Targeted feedback in the milestones era: utilization of the ask-tell-ask feedback model to promote reflection and self-assessment. J Surg Educ. 2015;72(6):e274–9. doi:10.1016/j.jsurg.2015.05.016

5. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326. doi:10.1001/jama.2009.1365

6. Ramani S, Krackov SK, Connell KJ, Ten Cate O. Feedback in medical education: what, why, and how. Med Teach. 2019;41(1):36–46. doi:10.1080/0142159X.2018.1436760

7. MacNiel P, Tallentire VR, Foley F. Improving feedback in medical education. Med Teach. 2015;37(6):558–563. doi:10.3109/0142159X.2014.955842

8. Nilsson P, Marriott L. Every shot must have a purpose: how GOLF54 can make you a better player. Gotham. 2007.

9. Guadagnoli MA. Feedback and performance: a review of the literature. Int J Sport Psychol. 2010;41(1):33–55.

10. Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ. 2008;337:a1961. doi:10.1136/bmj.a1961

11. Tavakol M, Torabi S, Akbar Zeinaloo A. Grounded theory in medical education research. Med Educ Online. 2006;11(1):4607. doi:10.3402/meo.v11i.4607

12. Stern PN. Grounded theory methodology: its uses and processes. Image (IN). 1980;12(1):20–23. doi:10.1111/j.1547-5069.1980.tb01455.x

13. Collings AT, Doster DL, Longtin K, Choi J, Torbeck L, Stefanidis D. Surgical resident perspectives on the preferred qualities of effective intraoperative teachers: a qualitative analysis. Acad Med. 2023;98(5):629–635. doi:10.1097/ACM.0000000000005131

14. Boysen PG. Millennials and the future of surgery: a generational analysis. J Surg Educ. 2017;74(2):343–350.

15. Dilullo C, Pattison T, Sherwood M. Understanding the Millennials. Med Teach. 2011;33(1):1–3. doi:10.3109/0142159X.2010.519412

16. Ebeling C. Engaging Millennials: lessons from the military. Harvard Business Rev. 2012;90(11):91–97.

17. U.S. Census Bureau. Millennials outnumber Baby Boomers and are far more diverse. Census Bureau Reports, 2015. Available from: https://www.census.gov/newsroom/press-releases/2015/cb15-113.html. Accessed February 25, 2025.

18. Chaudhuri E. Teaching strategies for Millennials and Generation Z. Educational Rev. 2019;71(2):153–167.

19. De S, Ruparel RK, Graham E. The role of residents in surgical education: a study of perceptions. J Surg Educ. 2015;72(4):674–682. doi:10.1016/j.jsurg.2015.01.008

20. Zhar R. Closing the generational gap in surgery: why so angry? Plastic Reconst Surg Global Open. 2016;4(10):e1087. doi:10.1097/GOX.0000000000001087

21. Quarles J. Millennials in the workplace: a study of work-life balance. J Business Psychol. 2015;30(3):423–438.

22. Coyne I. Enhancing feedback to improve learning outcomes. Med Teach. 2013;35(11):e1578–e1585.

23. Brukner H. Enhancing feedback in clinical education. Med Teach. 2014;36(7):595–600.

24. Krackov SK. The impact of workshops on faculty feedback skills. Med Teach. 2015;37(3):276–282.
