Correlation of GPA with Final MBBS Examination Scores Among Students on Three Campuses of the University of the West Indies
Authors Kumar A, Krishnamurthy K, Campbell MH, Connell KL, Lashley PM, Motilal S, Majumder MAA
Received 6 February 2025
Accepted for publication 14 May 2025
Published 23 May 2025 Volume 2025:16 Pages 891–901
DOI https://doi.org/10.2147/AMEP.S515141
Checked for plagiarism Yes
Review by Single anonymous peer review
Peer reviewer comments 2
Editor who approved publication: Dr Sateesh B Arja
Alok Kumar,1 Kandamaran Krishnamurthy,1 Michael H Campbell,1 Kenneth L Connell,1 Paula Michele Lashley,1 Shastri Motilal,2 Md Anwarul Azim Majumder1
1Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Bridgetown, Barbados; 2Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine, Trinidad and Tobago
Correspondence: Md Anwarul Azim Majumder, Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Bridgetown, Barbados, Email [email protected]
Objective: To evaluate the correlation between in-course grade point average (GPA) and exit examination scores in a five-year undergraduate medical program in a Small Island Developing State setting.
Methods: A retrospective observational study was conducted in the Faculty of Medical Sciences at three campuses of a single multinational university, involving 470 students. Pearson correlation coefficients were used to measure the strength of association between GPA and scores on various components of the final examination, as well as to determine the predictive value of GPA for overall performance on the examination.
Results: GPA showed a strong positive correlation (r > 0.7) with written Medicine and Therapeutics examination scores in cohort 1 and a moderate positive correlation (r = 0.5–0.7) in cohorts 2 and 3. Written Obstetrics and Gynecology scores were moderately positively correlated (r = 0.5–0.7) with GPA across all cohorts. For written Surgery examinations, the correlation was moderately positive in cohorts 1 and 3 but weak (r < 0.5) in cohort 2. GPA correlated moderately with scores on the Objective Structured Clinical Examination (OSCE) component of the Medicine and Therapeutics examination in all three cohorts; the Obstetrics and Gynecology and Surgery OSCEs each correlated moderately with GPA in cohorts 1 and 3 and weakly in cohort 2. GPA was strongly correlated with the total score on the final MBBS examination.
Conclusion: Although the degree of correlation between the GPA and scores on the different components of the final MBBS examination varied, there was a strong correlation between GPA and total score on the final examination. Our findings suggest that further discussion of the purposes and design of course-based and final examinations is needed.
Keywords: educational measurement, cross-sectional studies, GPA, academic performance, correlation, final MBBS examination, medical students
Introduction
The Bachelor of Medicine and Bachelor of Surgery (MBBS) program of the University of the West Indies (UWI) is a five-year program consisting of preclinical courses and clinical clerkships.1 The program is accredited by the Caribbean Accreditation Authority for Medicine and other Health Professions (CAAM-HP) and is recognized by the National Committee on Foreign Medical Education and Accreditation (NCFMEA) as comparable to US medical schools. Phase I (6 semesters) comprises a series of integrated courses (eg, Human Anatomy, Biochemistry, Physiology, Public Health, Behavioral Medicine, Pathology, Microbiology, and Pharmacology) and includes early exposure to patients and basic clinical skills. Phase II (4 semesters) spans 24 months and includes clerkships in Anesthesia; Child Health; Community Health; Emergency Medicine; Internal Medicine (including Dermatology and Venereology); Microbiology; Obstetrics and Gynecology; Pathology; Psychiatry; Radiology; and Surgery (including Ophthalmology, Orthopedics, and Otorhinolaryngology). Both phases comprise courses or clerkships delivered through lectures, conferences, seminars, tutorials, self-study, use of information technology, practical exposure, and demonstrations including clinical bedside teaching. Progress in each course or clerkship is assessed based on performance on a combination of in-course assignments and written, practical, clinical, and oral examinations.
Table 1 Detailed Marking System of FMS, UWI
Table 2 Score and Quality Point Equivalents for Letter Grades at The UWI
Table 3 Gender and Academic Performance of Students Sitting the 2019 Final MBBS Examinations (n = 470)
Table 4 Failure and Honors Rates by GPA Band (n = 470)
Table 5 Pearson’s Correlation Coefficients of GPA and Final MBBS Examination Component Scores (n = 470)
Table 6 Predictive Values of GPA for MBBS Final Examination Failures (n = 470)
The final MBBS examination consists of three component examinations: Medicine and Therapeutics (which includes community health, child health, and psychiatry components; MnT, 400 marks), Obstetrics and Gynecology (OnG, 200 marks), and Surgery (200 marks). Each component has written and clinical subcomponents. The details of the marking system are shown in Table 1. The raw scores from the written examination are standardized using the Modified Angoff method.2 The Modified Angoff method is a widely accepted and reliable method for standard setting in high-stakes examinations: it establishes a cut score based on the content of the examination and the required level of competency, rather than on the performance of a specific group of examinees.
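As an illustration of the standard-setting arithmetic, the sketch below computes an Angoff-style cut score from hypothetical judge ratings. The panel size, item count, and ratings are invented for the example and do not reflect the actual UWI exercise.

```python
# Sketch of the Modified Angoff cut-score calculation (illustrative only).
# Each judge estimates, per item, the probability that a minimally
# competent examinee answers correctly. Ratings below are hypothetical.
judge_ratings = [
    [0.8, 0.6, 0.7, 0.9],  # judge 1, items 1-4
    [0.7, 0.5, 0.6, 0.8],  # judge 2
    [0.9, 0.6, 0.8, 0.9],  # judge 3
]

def angoff_cut_score(ratings):
    # Average each item's ratings across judges, then sum over items:
    # the expected raw score of a borderline (minimally competent) candidate.
    n_judges = len(ratings)
    item_means = [sum(col) / n_judges for col in zip(*ratings)]
    return sum(item_means)

print(round(angoff_cut_score(judge_ratings), 2))  # cut score out of 4 items
```

In practice the panel would rate every item on the paper, and the summed expectation (often after discussion and a second rating round) becomes the pass mark.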
Because the MBBS degree is required for licensure to practice medicine in the Caribbean region, passing the final examination is one of the requirements for medical licensure. This is the most common assessment model for MBBS programs globally and is used in nearly one-third of US medical schools.3 Ultimately, the exit examination is the most consequential gauge of student success in medical school. However, a limitation of this assessment model is the tendency for students to cram for single high-stakes examinations, which may encourage substituting superficial knowledge for reflective learning.4 The GPA assessment model, in contrast, elevates the importance of formative assessments that offer opportunities to provide timely feedback to students, which drives learning through assessment.5,6 Additionally, the use of multiple observations and varied assessment methods over time compensates to some degree for flaws in any one method.6–8 A strong relationship between exit examination scores and GPA supports the validity of both assessment methods but also raises questions about the role of exit examinations in providing additional information to improve student assessment. There are very few studies on the correlation between GPA and exit or licensing examination scores.9–11
This regionally inclusive study of three large cohorts, one from each campus country in the MBBS program, examined the correlation between MBBS exit examination scores and cumulative GPA over the five-year medicine course. We hypothesized that grades from the exit examination would correlate strongly with GPA. Such a correlation would 1) establish GPA as a useful tool to identify weak students needing early intervention and 2) inform discussions regarding the utility of a single high-stakes exit examination after continuous assessment throughout the curriculum.
Materials and Methods
Design
This was an observational retrospective cohort study of students in the regional MBBS program offered in three campus countries (cohorts 1–3) of the University of the West Indies (UWI). Data were sourced from transcript and examination records.
Settings
Study settings were the three landed campuses of UWI: Barbados, Jamaica, and Trinidad & Tobago. Because the MBBS program is harmonized, each campus offers the same MBBS curriculum, and the same final examinations are given at all campuses.
Participants
Participants included 470 final-year students across the Caribbean who completed final examinations in June 2019. Students who sat this examination as a second or third attempt following previous failure were excluded, as were assessment data for students who missed one or more components of the examination for reasons such as illness.
Methods
The University uses an established scheme to convert letter grades to quality points, which are then used to compute GPA (Table 2). The final degree GPA includes all core MBBS courses and clerkships from Years 1 to 5.
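For illustration only, a credit-weighted GPA calculation of this kind can be sketched as follows. The quality-point mapping and credit hours below are hypothetical placeholders; the actual UWI grade-to-quality-point equivalents are those shown in Table 2.

```python
# Illustrative credit-weighted GPA calculation. The quality-point
# values here are HYPOTHETICAL -- the real UWI scheme is in Table 2.
QUALITY_POINTS = {"A": 4.0, "B+": 3.5, "B": 3.0, "C+": 2.5, "C": 2.0, "F": 0.0}

def gpa(records):
    """records: list of (letter_grade, credit_hours) tuples."""
    total_qp = sum(QUALITY_POINTS[grade] * hours for grade, hours in records)
    total_hours = sum(hours for _, hours in records)
    return round(total_qp / total_hours, 2)

# Hypothetical transcript fragment: three courses with different weights.
transcript = [("A", 4), ("B", 3), ("C+", 2)]
print(gpa(transcript))  # -> 3.33
```

The final degree GPA aggregates such records across all core courses and clerkships from Years 1 to 5.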
We extracted data from official examination results for June 2019, including final examination scores and cumulative GPA. Examination scores are tabulated, analyzed, and stored in Microsoft Excel spreadsheets; data for this study were extracted from this examination data bank. GPAs were extracted from Banner, the student records system used to track students’ progress through the MBBS program. Data were de-identified by removing names and identification numbers from the data set before analysis.
Analytic Strategy
The primary outcome of interest was the strength of correlation between GPA and final examination scores. DATAtab was used to test the data set for normal distribution; the data met normality assumptions. Differences in mean total grades for all six components of the final examination were analyzed using one-way ANOVA for independent samples, as were differences in mean GPA by country. We then calculated Pearson correlation coefficients to measure the strength of association between GPA and final examination scores. The predictive value of GPA for final examination failure was also calculated.
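As a sketch of the core statistic, Pearson's r for a handful of hypothetical GPA and total-score pairs can be computed as below. The data points are invented for the example; the study's actual analysis was run in DATAtab on the full n = 470 data set.

```python
# Minimal Pearson product-moment correlation, for illustration.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical (GPA, total exam score) pairs -- not study data.
gpas = [2.5, 3.0, 3.5, 4.0]
totals = [420, 455, 470, 500]
print(round(pearson_r(gpas, totals), 2))
```

Under the convention used in this paper, r > 0.7 would be read as a strong positive correlation, 0.5–0.7 as moderate, and below 0.5 as weak.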
Ethics Approval and Confidentiality
The study protocol received approval from The University of the West Indies, Barbados Ministry of Health Research Ethics Committee/Institutional Review Board (IRB: 200707-B). Permission to utilize examination grades for this study was granted by the Campus Registrar’s Office and University examiner for the Barbados, Jamaica, Trinidad and Tobago, and The Bahamas campuses. All data collected for this study remained anonymous and confidential. The study was conducted in accordance with the principles set forth in the Declaration of Helsinki.
Results
Including all campuses, 470 students sat the June 2019 final MBBS examination. Gender, GPA, and examination performance of students are summarized in Table 3. Final examination results are summarized by examination component in Figure 1. There were no differences in mean total examination scores by campus (cohort 1 = 472.23 ± 37.21; cohort 2 = 468.01 ± 35.57; cohort 3 = 479.14 ± 36.44; F = 1.78; P = 0.1698). For cohort 2, the mean score on OnG was significantly lower than at the other campuses (F = 11.84; P < 0.0001; HSD[0.05] = 4.14; HSD[0.01] = 5.15; M1 vs M2, P < 0.05; M1 vs M3, nonsignificant; M2 vs M3, P < 0.01). No other significant difference was noted. The GPA distribution for examination candidates is shown in Figure 2. For campus 1, 38.49% of students had a GPA ≥ 3.0; 82.28% and 57.54% of students at campuses 2 and 3, respectively, earned GPAs ≥ 3.0. The difference in mean GPA by campus was statistically significant (F = 93.09; P < 0.0001; HSD[0.05] = 0.13; HSD[0.01] = 0.16; M1 vs M2, P < 0.01; M1 vs M3, P < 0.01; M2 vs M3, P < 0.01).
Figure 1 Overall results from the June 2019 final MBBS examination at the UWI.
Figure 2 GPA distribution of 470 students taking the June 2019 final MBBS examination of the UWI.
Failure and honors rates by GPA band for all campuses are shown in Table 4. Overall failure rates among students with GPAs in the range of 3.00–3.49 and 3.50–3.99 were 8.88% and 1.14%, respectively. Figure 3 shows GPA and failure rates by country.
Correlations of GPA with written, OSCE, and combined (written + OSCE) final examination scores for each discipline by campus are shown in Table 5. We used the following convention for characterizing strength of association: r > 0.7, strong; r = 0.5–0.7, moderate; r < 0.5, weak. GPA correlated strongly with written MnT scores in cohort 1 and moderately in cohorts 2 and 3, and moderately with written OnG scores in all three cohorts. GPA correlated moderately with written Surgery scores in cohorts 1 and 3 but weakly in cohort 2. For the OSCE subcomponents, GPA correlated moderately with MnT scores in all three cohorts, and moderately with OnG and Surgery scores in cohorts 1 and 3 but weakly in cohort 2. For combined (written + OSCE) scores, GPA correlated strongly with MnT in cohorts 1 and 3 and moderately in cohort 2; moderately with OnG in all three cohorts; and moderately with Surgery in cohorts 1 and 3 but weakly in cohort 2.
The predictive value of GPA for grades in the final examination is shown in Table 6. R² values for cohort 2 were 0.5 or less for all components of the final examination. Table 6 also shows the predictive value of GPA range for overall failure on the final MBBS examination. The predictive value (the proportion of true negatives, ie, students who failed) of GPA scores in the range 2.00–2.49 was 0.26 (95% CI = 0.14–0.41), whereas the predictive value of GPA scores in the range 2.00–2.99 was 0.70 (95% CI = 0.55–0.81).
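As an illustration of how a predictive value and its 95% CI can be computed, the sketch below applies the Wilson score interval to hypothetical counts. The paper does not state which interval method was used, and the counts here are invented, so the output is not expected to reproduce the values in Table 6.

```python
# Predictive value of a GPA band for exam failure, with a Wilson score
# 95% CI (one common choice; the paper's method is not specified).
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts: 13 of 50 students in a low-GPA band failed.
failed, band_n = 13, 50
print(round(failed / band_n, 2))           # predictive value
lo, hi = wilson_ci(failed, band_n)
print(round(lo, 2), round(hi, 2))          # 95% CI bounds
```

The predictive value here is simply the observed failure proportion within the band; the interval quantifies its sampling uncertainty given the band's size.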
Discussion
Key findings of the present study:
- Correlation Between GPA and Final Examinations: Generally strong correlations between GPAs derived from course performance in the MBBS program and scores on written and clinical final examinations are evidence of criterion-related validity and suggest that both types of assessment address similar content domains. In the present study, correlations between in-course and final examination performance were all significant and positive, but the strength of the relationships varied considerably. In particular, the predictive power of GPA was weaker for cohort 2.
- Utility of Final Examinations: The high correlation of exit exam scores with GPA from continuous in-program assessment raises important questions about the incremental utility of final examinations for award of the MBBS degree and practice certification. The discussion of whether final exams are necessary given their strong relationship with in-course performance is not new11 but remains relevant, especially for medical schools operating in resource-limited settings. The multiple Small Island Developing States comprising the UWI medical program are such a setting, and the economic and administrative burdens of exit examinations are considerable for the University.
- Inter-Campus Differences in GPA: While the present findings suggest that in-course GPA has the potential to function as the primary criterion for award of degree and certification, more research is needed to understand inter-campus differences contributing to GPA.
- Variations in GPA Across Campuses: Significant differences in mean GPA among campus countries suggest that, although considerable efforts continue to ensure regional harmonization of the curriculum, standards for assessment (particularly standard setting) or actual student performance may vary by campus. Thus, although GPA correlates with final examination performance, further research is needed to understand these differences; larger studies including data from multiple years of examinations are especially needed. Further efforts to strengthen cross-campus harmonization of teaching, in-course assessment rubrics, and curriculum may be useful.
The analysis of the final MBBS examination results across three cohorts revealed important insights into the consistency and validity of assessment practices at the UWI. The lack of any significant difference in mean total grades in the three major disciplines of the final MBBS examination among the three campuses provides good statistical evidence of the internal consistency of the examination. This finding suggests that the core components of the final MBBS examination maintain a standardized level of difficulty and rigor. A prior study conducted over 20 years ago in the authors’ setting analyzed Objective Structured Clinical Examination (OSCE) scores in Medicine and Therapeutics across UWI’s four campuses over two years; OSCE scores were generally uniform across the campuses, confirming the consistency of teaching approaches and supporting the comparability of the medical graduates produced by UWI.12 Additionally, an item analysis of Multiple Choice Questions (MCQs) and Extended Matching Questions (EMQs) administered to 532 examinees across UWI’s four campuses during the final MBBS Medicine and Therapeutics examination of 2019 showed consistent performance measures, further supporting the standardization of the examination process.13
However, the significant discrepancy in the Obstetrics and Gynecology (OnG) scores for cohort 2 challenges this assumption. The lower mean scores in cohort 2 may point to variations in instructional quality, assessment difficulty, or differences in the preparedness of students across campuses. Research supports this assertion. Several studies have shown that the clinical teaching environment, particularly outpatient settings, can influence OSCE performance in OnG; the presence of residents, especially in university-based hospitals, has also been linked to improved student outcomes; and gender differences have been observed, with female students often outperforming male peers in OnG clerkships.14 Such disparities highlight the need for thorough alignment of teaching methodologies, clinical exposure opportunities, and examination standards to ensure fairness and equity.
The differences in mean GPA scores among the cohorts are another critical observation. Although the same GPA calculation scheme was used for all three cohorts of students, the mean GPA scores differed significantly among the three cohorts. Such differences may reflect different standards of assessment or different assessment methods used on different campuses. When geographically separate campuses each have their own faculty, harmonization of assessment methods, along with other aspects of the curriculum, becomes imperative to avoid unintended bias and to uphold the integrity of GPA as a universal metric of student performance. Studies of assessment across multiple sites have shown that such consistency is possible through a standardized curriculum and rubrics, national guideline-based content, centralized faculty development, and the exclusive use of objective, structured evaluation tools.15 Without such harmonization, GPAs risk being unreliable indicators when comparing students from different cohorts or campuses, potentially affecting decisions related to student progression and post-graduate opportunities.
We know that the reliability and validity of assessments are context specific, and there are reports suggesting that final examination performance is unrelated to students’ clinical experiences, calling into question the validity of final examinations.16 In the present study, GPA had a strong positive correlation with the overall total grades from all three disciplines of the final MBBS examination, with the exception of the overall total grades in cohort 2, and a strong to moderate positive correlation with the total scores in each of the three disciplines in all cohorts, with the exception of the Surgery total grades in cohort 2. These findings provide mutual validation for both assessment methods.17 Theoretically, this supports the criterion-related validity of GPA as a predictor of final examination performance and reinforces its value in competency-based medical education. Practically, it suggests that continuous in-course assessments could be used to identify struggling students early and reduce dependency on high-stakes final examinations, particularly in resource-constrained settings. Future research should investigate inter-campus variability in GPA and explore ways to standardize both formative and summative assessments across the region to enhance fairness, consistency, and educational quality.
Conclusion
In summary, GPA demonstrated inconsistent predictive value for final grades, with notable variation across disciplines and examination components. These findings underscore the urgent need for regional standardization of both summative and formative assessments. A study that integrated formative clinical skills examinations with real-time feedback and coaching into the early medical curriculum found that students valued the experience, reported improved clarity on their learning needs, and were able to apply feedback to subsequent summative assessments. Despite being resource-intensive, this mode of skills training enhanced the learning environment, reduced anxiety, and promoted self-regulated learning without negatively impacting student performance.18 The authors of that paper recommend early and structured adoption to support reflective practice and skill development. Given the inconsistencies observed in GPA predictability and assessment outcomes across the cohorts in this study, and in light of the broader push for more equitable and consistent evaluation methods, these findings collectively strengthen the case for implementing a unified final examination that could serve as a licensing examination for the Caribbean region.19 Lastly, future research should explore the underlying causes of the observed differences, including curriculum harmonization, discrepancies in clinical exposure, and local assessment practices.
Acknowledgments
The abstract of this paper was presented at the 6th International Anatomical Sciences and Cell Biology Conference 2022 as a poster presentation with interim findings. The poster’s abstract was published in the Conference Proceedings. DOI: 10.13140/RG.2.2.13162.39362.
Disclosure
Dr Md Anwarul Azim Majumder is the Editor-in-Chief of Advances in Medical Education and Practice. The other authors report no conflicts of interest in this work.
References
1. Majumder MAA, Kumar A, Krishnamurthy K, Ojeh N, Adams OP, Sa B. An evaluative study of objective structured clinical examination (OSCE): students and examiners perspectives. Adv Med Educ Pract. 2019;10:387–397. doi:10.2147/AMEP.S197275
2. Clauser BE, Margolis MJ, Case SM. Testing for licensure and certification in the professions. In: Brennan RL, editor. Educational Measurement.
3. Association of American Medical Colleges. Curriculum reports: assessment methods in clinical clerkship experiences (formative and/or summative). AAMC; 2021.
4. Yuan X. Evidence of the spacing effect and influences on perceptions of learning and science curricula. Cureus. 2022;14(1):e21201. doi:10.7759/cureus.21201
5. Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ Theory Pract. 1996;1(1):41–67. doi:10.1007/BF00596229
6. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387–396. doi:10.1056/NEJMra054784
7. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–235. doi:10.1001/jama.287.2.226
8. Epstein RM, Dannefer EF, Nofziger AC, et al. Comprehensive assessment of professional competence: the Rochester experiment. Teaching Learning Med. 2004;16(2):186–196. doi:10.1207/s15328015tlm1602_12
9. Pepple DJ, Young LE, Gordon-Strachan GM, Carroll RG. Pre-clinical grades predict clinical performance in the MBBS stage II examination at the University of the West Indies, Mona Campus. Nigerian J Physiological Sci. 2013;28(2):201–204.
10. Al-Wardy NM, Rizvi SG, Bayoumi RA. Is performance in pre-clinical assessment a good predictor of the final Doctor of Medicine grade? Saudi Med J. 2009;30(12):1590–1594.
11. Zahn CM, Saguil A, Artino AR Jr, et al. Correlation of national board of medical examiners scores with United States medical licensing examination Step 1 and step 2 scores. Acad Med. 2012;87(10):1348–1354. doi:10.1097/ACM.0b013e31826a13bd
12. Hickling F, Morgan K, Abel W, et al. A comparison of the objective structured clinical examination results across campuses of The University of the West Indies (2001 and 2002). West Indian Med J. 2005;54:139–143. doi:10.1590/S0043-31442005000200011
13. Kumar A, George C, Campbell MH, et al. Item analysis of multiple choice and extended matching questions in the final MBBS medicine and therapeutics examination. J Med Edu. 2022;21(1):e129450. doi:10.5812/jme-129450
14. Myles TD. Obstetrics and gynecology final examination scores at university and community hospitals. A comparison. J Reprod Med. 2001;46(4):371–375. PMID: 11354839.
15. Judd CA, Dong T, Foster C, Durning SJ, Hickey PW. Evaluating intersite consistency across 11 geographically distinct pediatric clerkship training sites: providing assurance that educational comparability is possible. Mil Med. 2023;188(Suppl 2):81–86. doi:10.1093/milmed/usad044.
16. McManus IC, Richards P, Winder BC, Sproston KA. Clinical experience, performance in final examinations, and learning style in medical students: prospective study. BMJ. 1998;316(7128):345–350. doi:10.1136/bmj.316.7128.345
17. Wilkinson TJ, Frampton CM. Comprehensive undergraduate medical assessments improve prediction of clinical performance. Med Educ. 2004;38(10):1111–1116. doi:10.1111/j.1365-2929.2004.01962.x
18. Sirianansopa K. Evaluating students’ learning achievements using the formative assessment technique: a retrospective study. BMC Med Educ. 2024;24:1373. doi:10.1186/s12909-024-06347-5
19. Carswell F, Primavesi R, Ward P. Qualifying exams for medical students: are both major finals and continuous assessment necessary? Med Teach. 1987;9(1):83–90. doi:10.3109/01421598709028983