
Perception of Medical and Nursing Students Plus Clinical Instructors Towards Objective Structured Clinical Examination: A Case Study of Five Health Training Institutions in Sub-Saharan Africa

Authors Kibuuka R, Mpasa F, Kanyike AM, Ndikom CM, Kaminga AC, Owusu-Sekyere S, Ogah A, Kusi Amponsah A, Kiyimba K, Obakiro SB, Munthali G, Msowoya WK, Kibuule D, Phiri EC, Baluwa M, Phiri T, Katuramu R

Received 30 January 2025

Accepted for publication 24 June 2025

Published 28 June 2025 Volume 2025:16 Pages 1103–1127

DOI https://doi.org/10.2147/AMEP.S520065

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Sateesh B Arja




Ronald Kibuuka,1 Ferestas Mpasa,2 Andrew Marvin Kanyike,1 Chizoma Millicent Ndikom,3 Atipatsa Chiwanda Kaminga,2 Samuel Owusu-Sekyere,4 Adenike Ogah,5 Abigail Kusi Amponsah,6 Kennedy Kiyimba,1 Samuel Baker Obakiro,1 Getrude Munthali,2 Wanangwa Kenneth Msowoya,2 Dan Kibuule,7 Etta Chimbe Phiri,2 Masumbuko Baluwa,2 Tamara Phiri,2 Richard Katuramu8

1Department of Pharmacology and Therapeutics, Busitema University Faculty of Health Sciences, Mbale, Uganda; 2Department of Nursing and Midwifery, Mzuzu University, Mzuzu, Malawi; 3Department of Nursing, College of Medicine, University of Ibadan, Ibadan, Nigeria; 4African Forum for Research and Education in Health, Kumasi, Ghana; 5Department of Pediatrics and Child Health, School of Medicine, University of Zambia, Lusaka, Zambia; 6Department of Public Health Nursing, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana; 7Office of the Dean, Busitema University, Faculty of Health Sciences, Mbale, Uganda; 8Department of Internal Medicine, Busitema University Faculty of Health Sciences, Mbale, Uganda

Correspondence: Ronald Kibuuka, Email [email protected]; [email protected]

Background: Objective Structured Clinical Examination (OSCE) is the gold standard for assessing clinical competencies. However, resource constraints and logistical challenges in Sub-Saharan Africa (SSA) hinder its effectiveness. This study investigated the perceptions and experiences of medical and nursing students and clinical instructors toward OSCE in SSA.
Methods: A mixed-methods sequential explanatory design was used, involving 686 undergraduate health care students and 46 clinical instructors from Busitema University (Uganda), Mzuzu University (Malawi), University of Ibadan (Nigeria), Kwame Nkrumah University of Science and Technology (Ghana), and University of Zambia (Zambia). Quantitative responses were analyzed using IBM SPSS Statistics version 25, with comparisons between medical and nursing student responses made using the chi-square test. Qualitative data were thematically analyzed.
Results: A total of 686 students and 46 clinical instructors participated in the study. The majority of students, 57.6% (n = 395, P-value < 0.001), and instructors, 71.8% (n = 33), recognized OSCE as a comprehensive tool for assessing clinical skills and knowledge, respectively. Among students, 80.8% (n = 554, P-value = 0.031), 66.6% (n = 457, P-value = 0.001), 66.5% (n = 456, P-value = 0.020) and 61.4% (n = 421, P-value = 0.001) cited anxiety, station timing, examiners’ behavior and content load, respectively, as factors influencing performance. Among clinical instructors, 58.7% (n = 27) noted that OSCE scenarios take longer to prepare; however, 71.8% (n = 33) highlighted its objectivity. Students praised OSCE’s objectivity but criticized insufficient time at some stations and organizational issues. Facilitators cited objectivity and competence assessment but noted resource insufficiencies and student stress. Suggestions for improvement included mock OSCEs, training of clinical instructors, mixed-method assessment and feedback to improve performance.
Conclusion: While OSCE demonstrates significant strengths in promoting fairness in the assessment of clinical competencies, addressing logistical challenges, examiner variability, student anxiety, and timely feedback is crucial.

Plain Language Summary: This study explored the perceptions of medical and nursing students and clinical instructors towards the Objective Structured Clinical Examination (OSCE) in five sub-Saharan African health training institutions (Uganda, Malawi, Nigeria, Ghana, Zambia). We used questionnaires among 686 students and 46 clinical instructors, focus group discussions, and key informant interviews to explore their perceptions. Both students and clinical instructors mostly preferred OSCE because it is fair, organized, and tests real healthcare skills better than conventional methods like the long case. However, students said the time at some stations was too short and stressful, and they noted challenges during OSCE such as a lack of materials at some stations. Teachers agreed that OSCE is helpful but hard to run due to insufficient staffing and space. Both groups suggested more practice sessions through pre-OSCEs, better training for teachers, and feedback to help students improve. The study confirms that OSCE is the most efficient way to evaluate nursing and medical students; however, it also highlights that a mixed-method evaluation would be the best option for assessing clinical competence.

Keywords: OSCE, health care professions education, Sub-Saharan Africa, medical assessment

Introduction

Objective Structured Clinical Examination (OSCE) is increasingly recognized as a superior method for assessing clinical competencies in health professional education compared to other methods such as logbooks, long cases and case studies.1 It was introduced as a method of assessing medical students in 1975 by Harden and Gleeson.2 OSCE provides a standardized and structured setting in which students can demonstrate clinical skills, decision-making, and communication in a series of simulated and real-time scenarios that reflect real-life clinical challenges, giving students the opportunity to apply theoretical knowledge in a controlled yet practical environment.3 Checklists and trained examiners enhance objectivity. This makes OSCE a gold standard for the assessment of clinical skills.2,4,5 However, Hodges et al (1999) concluded that binary checklists, one form of rating scale used in OSCE, may not be a valid measure of increasing competence.6

The use of OSCE in most parts of the world underscores its perceived reliability and adaptability. Significant impacts have been reported across several regions: studies in North Africa have focused on inclusive assessment of skills, while similar work in Europe has indicated standardization of clinical assessments through a structured format.7,8 OSCE’s ability to assess the cognitive, psychomotor, and affective domains of learning has been hailed as a major strength over other traditional methods such as long cases, logbooks, and unstructured clinical assessments.9 Several studies have identified its fairness, coverage of diverse clinical skills, and its capacity to simulate real-life scenarios, making it a preferred choice in various healthcare training institutions.1,5,10

Despite these advantages, OSCE is by no means devoid of criticism. Highly demanding logistics, resource-intensiveness, and stress among both students and examiners have been repeatedly reported.11,12 These challenges could compromise the overall effectiveness of OSCE through logistical bottlenecks, poor consistency in examination administration, and increased levels of anxiety, which could affect students’ performance and examiners’ objectivity. In addition, infrastructural and financial limitations make OSCE difficult to conduct in resource-poor settings like Sub-Saharan Africa. Such difficulties often account for differences in perception about the efficacy of OSCE among students and clinical instructors alike.

Within Sub-Saharan Africa (SSA), where health education is a matter of prime priority to meet high regional health demands, OSCE is one of the leading approaches for ensuring that healthcare professions students have the competencies required to deliver improved health services. OSCE is a vital tool for enhancing healthcare professional competency in SSA and the world at large, offering a structured and reliable method for assessing clinical skills amidst the region’s complex health landscape. Whereas OSCE has been identified as a method of assessing students’ acquisition of the Medical Education Partnership for Equitable Services to all Ugandans (MESAU) competencies, some medical schools continue to experience challenges in implementing this method.13 There is a need for effective health education strategies that can adequately prepare professionals to address these pressing health issues.14,15 The debate over the ability of OSCE to comprehensively assess clinical competency is indeed polarizing among educators and practitioners. This ongoing discussion highlights concerns regarding the reliability and effectiveness of OSCE, particularly in different cultural and educational contexts. While OSCE is widely used, its reliability is often debated, leading to varying opinions on its overall efficacy as a comprehensive assessment tool for clinical skills.16,17

This study aims to explore the perceptions and experiences of medical and nursing students plus clinical instructors regarding OSCE in healthcare training institutions across five selected institutions from five different countries in Sub-Saharan Africa. In doing so, the study establishes strengths, weaknesses, and modifications that can be made for further improvement. The findings provide useful insight into optimizing OSCE and will help health institutions, educators, and policymakers in their pursuit of improved assessment strategies to enhance the competence of health professionals in resource-poor countries.

Methods

Study Design

A mixed-methods sequential explanatory research design involving both quantitative and qualitative methods was employed among medical and nursing students plus clinical instructors. The quantitative study was a cross-sectional design, while the qualitative study involved Focus Group Discussions with students and in-depth interviews with clinical instructors.

Study Setting and Participants

The study was conducted across five health training universities in five SSA countries: Busitema University (Uganda), Mzuzu University (Malawi), University of Ibadan (Nigeria), Kwame Nkrumah University of Science and Technology (Ghana), and the University of Zambia (Zambia).

Busitema University, Uganda: Busitema University is a multi-campus public university in Eastern Uganda. It was established in 2007 by Statutory Instrument No. 22 under the Universities and Other Tertiary Institutions Act. Its mission is to provide high-standard training and engage in research that drives socio-economic transformation and sustainable development. Busitema University holds a commitment to academic excellence as it contributes towards the greater development activities of Uganda through research and community outreach.

Mzuzu University, commonly known as MZUNI, Malawi: Chartered in 1997 under Chapter 30:09 of the Laws of Malawi, Mzuzu University is a public institution of higher learning committed to fostering knowledge and offering quality education, research, and training. The university aims to meet the educational expectations of Malawi, Africa, and the global community. Courses offered range from certificate to doctorate level and are divided across six faculties that encompass Health Sciences, Education, and Environmental Sciences. It is a dual-mode institution offering face-to-face and Open Distance e-Learning. In addition, over 50 research projects have so far been successfully executed within the university, with major funding from international organizations such as the World Bank, ACIAR, and USAID. These projects include, among others, ICT, climate change, health, and renewable energy studies.

The University of Ibadan (UI), Nigeria, was established in 1948, initially as a college of the University of London, before becoming a full-fledged independent university in 1962. Widely recognized as Nigeria’s first and premier university, its mission is “to expand the frontiers of knowledge through academic excellence geared towards meeting the needs of society.” UI’s College of Medicine, established in 1948, is the first in Nigeria, followed by the pioneering Nursing program in 1965. The nursing program at UI has produced many nurse leaders serving across the globe.

Kwame Nkrumah University of Science and Technology, Ghana: KNUST was founded in 1951 as the Kumasi College of Technology and attained full university status in 1961, when it was renamed Kwame Nkrumah University of Science and Technology. The vision of the university is to be among the leading science and technology institutions in Africa, driving the creation and utilization of knowledge through research, quality teaching and learning, and community involvement through outreach. KNUST commits itself to the promotion of innovation, entrepreneurship, and technological leadership in all its academic and research programs.

University of Zambia, Zambia: The University of Zambia (UNZA) was established in 1965 and is the largest and oldest learning institution in Zambia. The university officially opened its doors in 1966. UNZA provides education through the medium of English and lies at the heart of Zambia’s education and research, playing a key role in training in health and medical sciences and in furthering academic development both nationally and regionally.

The study sites were selected because they were part of the joint project funded by the AFREhealth small grants Cohort 2 (2023) and to ensure SSA regional representation. The universities have different faculties, including engineering, biological sciences, education, agriculture, and health sciences. We then selected students in the faculties of health sciences offering the Bachelor of Medicine and Bachelor of Surgery and the Bachelor of Nursing, plus their clinical instructors. The study aimed to establish the effectiveness of the OSCE as an assessment tool among students of medicine and nursing plus their clinical instructors.

Study Population and Sample

The study population was undergraduate healthcare professions students and clinical instructors from the five participating institutions. Eligible participants included medical and nursing students who had been involved in at least one OSCE by the time data was collected. Clinical instructors who had been directly involved in the administration and evaluation of OSCE were also included.

The student sample size determination in this study was based on the formula developed by Krejcie and Morgan (1970), which is widely applied for the calculation of sample size in finite populations.18 The formula gives a systematic approach that helps ensure the accuracy of statistical results because it considers the population size, desired confidence level, and margin of error. Key variables in the calculation include the z-score (or its chi-square equivalent), population proportion, and total population size. Based on these calculations, the final sample size selected for the study was 571 students. Ultimately, 764 students were recruited, of whom 78 had never participated in any OSCE; these were excluded, leaving 686 eligible student participants. This surpassed the initially estimated sample size both at each institution and overall, reinforcing the statistical validity of the results. All instructors who consented to participate in the study were recruited. This distribution represents varied students who can give fact-based information to help this study achieve its aims. For the qualitative part, Focus Group Discussions (FGDs) with students and Key Informant Interviews (KIIs) with clinical instructors were carried out until no new themes emerged (data saturation).
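For reference, the Krejcie and Morgan formula takes the form below (in LaTeX notation). The parameter values given in parentheses are the conventional defaults and are an assumption here, as the paper does not report the exact values used:

s = \frac{\chi^{2} N P (1 - P)}{d^{2}(N - 1) + \chi^{2} P (1 - P)}

where s is the required sample size, \chi^{2} is the table value of chi-square for 1 degree of freedom at the desired confidence level (3.841 at 95% confidence), N is the population size, P is the population proportion (0.5 yields the maximum sample size), and d is the margin of error (commonly 0.05).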

Sampling Technique

A stratified random sampling technique was employed in order to capture the sample across various academic years, programs, and instructor roles. The strata consisted of first- to sixth-year students in various health professions programs and instructors within the pediatrics, surgery, internal medicine, obstetrics and gynecology, and nursing departments. The final sample was allocated proportionately to each institution based on the size of its eligible population, as illustrated below. Only students and instructors who had ever been involved in OSCE were included in the study.
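As an illustration of proportionate allocation (a standard formula, not stated explicitly in the paper), the number of participants drawn from each stratum h is:

n_{h} = n \times \frac{N_{h}}{N}

where n is the total sample size, N_{h} is the number of eligible participants in stratum h (for example, an institution or year of study), and N is the total eligible population.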

Data Collection Instruments and Procedures

Data collection was carried out over a period of four weeks using a semi-structured questionnaire adapted from Fisseha (2021),19 sent via a Google Form link to the class WhatsApp groups of all eligible years of study in the different institutions taking part in the study. FGDs were also carried out to gain deeper insight into student perceptions and experiences with evaluation, in addition to KIIs with clinical instructors. A total of six FGDs were conducted: one at each institution, namely Busitema University, Uganda (FGD 001), Mzuzu University, Malawi (FGD 002), the University of Ibadan, Nigeria (FGD 003), Kwame Nkrumah University of Science and Technology, Ghana (FGD 004), and the University of Zambia, Zambia (FGD 005), plus an additional mixed group (FGD 006), conducted via Zoom, which brought together students from different institutions and courses to allow for cross-disciplinary and cross-country discussion. Each FGD consisted of five participants, except FGD 006, which had ten participants, two from each institution. The discussions lasted between 60 and 90 minutes, allowing for an in-depth exploration of shared and divergent experiences with OSCE and other evaluation methods across different institutions and disciplines.

In addition to the FGDs, ten KIIs were conducted with clinical instructors, each lasting between 20 and 30 minutes.

An open-ended interview guide was used during both the FGDs and KIIs to ensure consistency across discussions. All interviews and discussions were audio-recorded with participants’ consent and subsequently transcribed verbatim for analysis.

Data Analysis

Quantitative data were downloaded as a Microsoft Excel sheet, cleaned, and analyzed using IBM SPSS Statistics version 25. Descriptive statistics, including means, standard deviations, and frequencies, were used to summarize the data. The statistical significance of differences between medicine and nursing student responses was then assessed using the chi-square test, with P-values reported for each item. Instructor responses were analyzed separately to assess their unique perspectives on OSCE efficacy and administration.
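As a minimal illustrative sketch of this comparison: the analysis was performed in IBM SPSS Statistics version 25, but an equivalent chi-square test of independence can be expressed in Python with SciPy. The counts and response categories below are hypothetical and are not the study’s data.

    # Hypothetical sketch of the chi-square test used to compare medicine and
    # nursing students' responses to a single questionnaire item. The study
    # itself used IBM SPSS Statistics version 25; these counts are invented.
    from scipy.stats import chi2_contingency

    # Rows: medicine, nursing; columns: agree, neutral, disagree (hypothetical)
    observed = [
        [150, 60, 61],   # medicine students
        [240, 95, 80],   # nursing students
    ]

    # chi2_contingency runs the chi-square test of independence on the
    # contingency table, returning the statistic, P-value, degrees of
    # freedom, and the expected frequencies under the null hypothesis.
    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {dof}, P-value = {p_value:.3f}")

A P-value below 0.05 would indicate a statistically significant difference between the two programs’ response distributions, which is how the P-values reported in Tables 2 to 5 are interpreted.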

Qualitative data, on the other hand, were analyzed using thematic analysis, following Braun and Clarke’s six-step framework. This method allowed for a systematic exploration of recurring themes and patterns within the data. The process began with familiarization, where transcripts were read and re-read to identify key ideas. This was followed by the generation of initial codes, where data were organized into meaningful groups based on the themes emerging from the transcripts.

Once the themes were identified, they were reviewed and refined to ensure they accurately captured the data. The themes were then defined and supported by representative quotes from participants to illustrate key points. Coding was performed by two independent researchers to increase the reliability of the analysis.

Ethical Considerations

Ethical approval was provided by Busitema University Research Ethics Committee (BUFHS-2023-132), and administrative clearance was sought and obtained from the rest of the participating institutions. Prior to obtaining ethical clearance and administrative clearance from the respective bodies, a Memorandum of Understanding (MOU) was signed by the Vice Chancellors of the participating institutions, providing the framework for the inter-institutional collaboration on this study.

In line with the Declaration of Helsinki, informed consent was obtained from all participants, and their confidentiality and anonymity were strictly maintained. Participation was entirely voluntary, with no consequences for withdrawal at any stage. The study posed no harm to participants, and they were fully informed about its purpose, procedures, potential risks, and benefits. All data collected were securely stored and used exclusively for research purposes, ensuring adherence to the highest ethical standards throughout the study.

Results

Demographic Characteristics of Participants

Students

A total of 764 students participated in this study, as highlighted in Table 1. They were mostly female, 61.9% (n = 473). The majority of students were Christian, 90.4% (n = 691), followed by Muslim, 5.8% (n = 44). Most were in Year three, 30.6% (n = 234), followed by Year four, 29.8% (n = 228). The mean age of the students was 25.38 ± 5.00 years (range: 18–48 years).

Table 1 Characteristics of the Participants

The majority of students, 64.5% (n = 493), were on the Nursing program, and 35.5% (n = 271) were on the Bachelor of Medicine and Bachelor of Surgery (MBChB) program. All the students, 100% (n = 764), were at the bachelor’s degree level, with 65.4% (n = 500) being direct entrants. Participants were enrolled from five universities, namely: Kwame Nkrumah University, 25.9% (n = 198), Mzuzu University, 25.5% (n = 195), University of Zambia, Lusaka, 19.8% (n = 151), University of Ibadan, Nigeria, 17.7% (n = 135), and Busitema University, 11.1% (n = 85). The majority of students, 89.8% (n = 686), had attended at least one Objective Structured Clinical Examination (OSCE).

Clinical Instructors

Table 1 further highlights the 46 clinical instructors who participated in the study. Most were female, 60.9% (n = 28). By institution, 28.3% (n = 13) were from Kwame Nkrumah University, 19.6% (n = 9) from University of Ibadan, Nigeria, 19.6% (n = 9) from Mzuzu University, 17.4% (n = 8) from Busitema University, and 15.2% (n = 7) from University of Zambia, Lusaka.

Academic positions of instructors varied, with 34.8% (n = 16) being Lecturers, 26.1% (n = 12) Senior Lecturers, 26.1% (n = 12) Assistant Lecturers, and 13.1% (n = 6) Teaching Assistants. Half of the participants, 50.0% (n = 23), belonged to the Nursing department. In terms of professional title, half, 50.0% (n = 23), were nurses, followed by 28.3% (n = 13) consultants and 21.7% (n = 10) medical doctors. The level of education attained varied, with the majority, 56.5% (n = 26), holding a Master’s degree, 28.3% (n = 13) holding a Bachelor’s degree, and 15.3% (n = 9) holding a PhD. The age distribution of instructors indicated that the majority, 37.0% (n = 17), were between 36 and 45 years of age. More than half of the instructors (52.2%, n = 24) had ≤5 years of service.

Students’ Perception of OSCE

Table 2 shows medicine and nursing students’ views of various attributes of OSCE, alongside the overall responses. The table also shows the statistical significance of differences between nursing and medicine students’ responses. OSCE’s fairness was acknowledged by 47.7% (n = 327, P-value = 0.258). OSCE was noted to assess a wide field of knowledge by 54.8% (n = 376, P-value = 0.169) of the respondents. However, the majority of students, 74.8% (n = 513, P-value = 0.001), agreed that more time is needed at stations. Good administration of OSCE was acknowledged by 43.0% (n = 295, P-value = 0.024). OSCE was considered very stressful by 61.5% (n = 422, P-value < 0.001) of students and intimidating by 51.5% (n = 353, P-value = 0.181). With regard to organizational and sequential structuring, 53.2% (n = 365, P-value < 0.001) agreed that the OSCE was well structured. OSCE was noted to decrease the possibility of failure by 34.7% (n = 238, P-value < 0.001).

Table 2 Attributes of OSCE

OSCE was noted to be less stressful than other assessment methods by only 25.8% (n = 177, P-value = 0.033). Additionally, 49.6% (n = 340, P-value < 0.001) agreed that OSCE left room for compensation in some areas. Further, 62.2% (n = 427, P-value = 0.001) of students reported that OSCE highlights areas of weakness, and 49.1% (n = 337, P-value = 0.115) noted that they were aware of the level of knowledge needed in OSCE. Lastly, 57.6% (n = 395, P-value < 0.001) agreed that OSCE covered a wide range of clinical skills.

Qualitative Findings on Clinical Students’ OSCE Perception and Experience

General Perception of OSCE

Clinical students typically described OSCE as a structured, timed examination with many stations. The majority of these stations required students to perform clinical activities, interpret situations, or give answers in the presence or absence of an examiner. Most students appreciated the OSCE format, citing its objectivity, the wide range of skills tested, and its standardization.

..OSCE is a form of assessment that is organized in a way that you reach a station, the questions are there, timed, and you answer the questions in that station... (FGD 001, Participant 3) another participant emphasized,

..It’s like an objective exam...tasked to do different things at different stations within a given time frame... (FGD 001, Participant 3)

The majority of clinical students further asserted that underperformance at a particular station is compensated for by achievement at other stations.

..It examines us on many cases…not like one case in long case exams…and gives us a better chance to perform overall using many stations…. (FGD 005, Participant 2)

However, others complained that the time allocated per station was typically insufficient, especially for complex procedures, and imposed unnecessary stress.

..Some questions are too long to be answered in five minutes. You leave the station feeling like you didn’t show your knowledge well... (FGD 004, Participant 1)

Perceived Advantages of the OSCE

Many students acknowledged the benefits of OSCE in preparing them for real-world clinical practice, noting its comprehensive assessment of skills, standardization, and exposure to a variety of clinical scenarios. One participant noted:

..Generally, a good one. several stations. prepares us well for practical application in the hospital... (FGD 004, Participant 7)

Challenges Faced by Students in OSCE

Despite the advantages, students noted several challenges faced during OSCEs. These included:

(a) Time Constraints

Many students felt that the allocated time per station was inadequate to fully demonstrate their competencies. One student lamented:

..Usually, they give us a maximum of around seven minutes per station. Sometimes, that’s not enough to complete all tasks... (FGD 001, Participant 3)

(b) Examiner Influence and Bias

Participants described instances where they perceived examiner bias or inconsistencies in marking. Examiner presence was also a concern, as some students felt observed rather than assessed. One student noted:

..Sometimes, lecturers are not present. Some come late and only ask a few things...this affects fairness... another student added ...some examiners tend to give lower marks... (FGD 003, Participant 2)

(c) Complexity of Cases and Station Setup

Some stations were deemed too complex or unrealistic. Students were concerned about the fairness of case distribution and the variability in difficulty across stations.

..some stations had cases we had never encountered, yet we were expected to manage them in five minutes... (FGD 003, Participant 2)

(d) Logistical Challenges and Technical Issues

Participants highlighted logistical challenges including poor station organization, insufficient resources for clinical demonstrations, and a lack of clarity in instructions at some stations.

One student noted

..some stations lack the necessary materials, like gloves or stethoscopes, which makes it difficult to complete tasks... (FGD 001, Participant 1)

Student Coping Mechanisms and Recommendations

Students employed various strategies to navigate OSCE challenges, often forming study groups and practicing OSCE scenarios among themselves. One student explained:

..We try to simulate the OSCE format in small groups, taking turns acting as examiners and candidates. This way, we become familiar with the time constraints and expectations... (FGD 005, Participant 2)

Peer learning and discussions were widely used as coping strategies. Some students reviewed past OSCE experiences and case scenarios, while others sought guidance from senior students. A participant shared:

..Talking to senior students who had done the OSCE before helped me understand what to expect and how to prepare better... (FGD 003, Participant 2)

Students also emphasized the role of mental preparation, as one explained:

..I practice calming techniques before the exam because the pressure can be overwhelming. If I don’t control my anxiety, I forget even the simplest procedures…(FGD 004, Participant 5)

Recommended Improvements

Students made several recommendations for improving the OSCE experience, including better time allocation, standardization of examiner involvement, improved resource availability, balanced case distribution, pre-OSCE orientation and mock exams.

One student proposed:

..If we had regular mock OSCEs before the actual exam, we would perform better because we would already be used to the format... (FGD 002, Participant 3)

Structure of OSCE

Table 3 presents medicine and nursing students’ perceptions of the OSCE structure across eight key aspects. The table also shows the statistical significance of differences between nursing and medicine students’ responses. More than half of the respondents, 53.5% (n = 367), agreed that they were fully aware of the nature of the exam. Similarly, 53.2% (n = 365) of the respondents agreed that tasks in OSCE reflect those taught. Concerning sufficiency of time at stations, only 16.5% (n = 113) of the respondents agreed that time at each station was sufficient. Regarding station authenticity, only 33.7% (n = 231) of the respondents indicated that the setting and context at each station felt authentic, while the majority, 54.8% (n = 376), agreed that the instructions given were clear and unambiguous. Similarly, 50.7% (n = 348) believed that the tasks given were fair. The sequence of the stations was considered logical and appropriate by 48.5% (n = 333) of the respondents. Lastly, OSCE was considered to provide learning opportunities by 56.4% (n = 387) of the respondents. All responses showed a statistically significant difference between nursing and medicine students (P-value < 0.05).

Table 3 Structure of OSCE

Organization and Components of the OSCE

The Objective Structured Clinical Examination (OSCE) was viewed as a structured, timed series of stations designed to assess various clinical competencies. Each station presented a unique task requiring students to demonstrate their clinical knowledge, practical skills, and communication abilities. Participants described the structured nature of the OSCE, emphasizing its role in mirroring real-world clinical scenarios.

..You move from one station to another, encountering different cases, each testing a specific skill—history taking, physical examination, procedures, or communication... (FGD 003, Participant 5)

Time Allocation and Station Setup

Time constraints were a significant structural element of OSCE, with students typically allocated between five and ten minutes per station. While some appreciated the controlled environment, others felt that the time was insufficient for complex clinical scenarios.

..The time per station is short, especially when you need to perform multiple steps like taking history, examining the patient, and giving a diagnosis... (FGD 001, Participant 3)

Many students further reported that the uniform time slots for all stations did not match the complexity of tasks. Suggestions included allocating time based on station complexity or allowing flexible timing.

..There are some stations where even reading the question takes two minutes…you’ve already lost time... (FGD 002, Participant 5)

Students noted variety in station design, with some stations involving standardized patients while others relied on mannequins or theoretical cases. Students highlighted inconsistencies in station setup, with some stations well prepared and others lacking essential materials.

..Some stations are well-equipped, while others do not even have the basic tools needed to complete the task. This usually affects our ability to demonstrate our skills properly... (FGD 005, Participant 1)

Role of Examiners and Standardized Patients

The presence of examiners at some stations was a defining aspect of the OSCE. Students recognized their role in assessing competency but also pointed out variations in how examiners conducted the assessments.

..Some examiners give clear instructions and guidance, while others just sit and observe without engaging, which makes it hard to know if you’re doing the right thing... (FGD 001, Participant 5)

The use of standardized patients was another structural feature. While beneficial in simulating real interactions, students reported inconsistencies in how standardized patients responded to clinical questions.

..Sometimes the simulated patients are very helpful, but in some cases, their responses are vague or misleading, making it difficult to reach a diagnosis... (FGD 004, Participant 2)

Marking Scheme and Feedback Mechanism

The OSCE followed a predetermined marking scheme where students were graded on specific competencies. While the structured marking was intended to ensure fairness, students reported variability in scoring and limited feedback.

..We don’t always get detailed feedback on our performance. After the OSCE, you just receive a score, but it’s unclear what specific areas you did well in or where you failed... (FGD 004, Participant 3)

Several students thus recommended improving OSCE structure by incorporating immediate feedback to enhance the learning process.

..If we could receive feedback right after completing a station, we would understand our mistakes and improve in real-time... (FGD 005, Participant 1)

Logistical Challenges and Variability in Station Design

Students identified several logistical challenges that impacted the OSCE structure, including variations in difficulty levels across stations, insufficient preparation time, and inadequate equipment at some stations.

..Some stations were straightforward, while others were unexpectedly difficult, and you couldn’t tell whether it was due to poor setup or if the cases were just harder... (FGD 002, Participant 3)

Another student echoed

..In some cases, we had to share stations because of limited space, which added extra pressure and reduced our ability to focus... (FGD 003, Participant 1)

Factors Influencing Student Performance During OSCE

Table 4 presents participants’ perceptions of the various factors influencing performance in OSCE. The table also shows the statistical significance of differences between nursing and medicine students’ responses. The majority of respondents, 66.6% (n = 457, P-value = 0.001), indicated that timing per station influenced their performance. Similarly, 61.4% (n = 421, P-value = 0.055) of respondents indicated that content load influences performance. Examiner behavior during the OSCE was considered a factor influencing performance by 66.5% (n = 456, P-value = 0.020) of the students. Anxiety was another factor, with 80.8% (n = 554, P-value = 0.031) indicating that it affected their performance.

Table 4 Factors Influencing Student Performance During OSCE

Delayed communication about the exam was another factor noted, with 24.9% (n = 171, P-value = 0.954) stating that it influences performance. Unclear scenarios were found to influence performance by 45.6% (n = 313, P-value = 0.172) of students. Regarding personal preparation, 55.1% (n = 378, P-value < 0.001) of participants believed that a lack of self-preparation influences performance. Similarly, 55.2% (n = 379, P-value < 0.001) of respondents noted that the use of improvised equipment affects performance.

Level of Preparation and Study Approaches

The extent to which students prepared for the OSCE significantly influenced their performance. Many participants reported engaging in peer discussions, practical rehearsals, and independent study to familiarize themselves with expected clinical scenarios. Structured revision through simulation and practice sessions helped boost confidence and skill acquisition.

..We try to simulate the OSCE format in small groups, acting as examiners and candidates. This way, we become familiar with the time constraints and expectations... (FGD 005, Participant 2)

Anxiety and Psychological Pressure

Anxiety emerged as a significant determinant of student outcomes during the OSCE. Many students reported experiencing nervousness due to the high-stakes nature of the exam, strict time constraints, and the presence of examiners. The pressure to perform under observation sometimes led to errors in execution.

..The moment you enter the station, knowing someone is watching you, your mind can go blank even if you know the procedure well... (FGD 001, Participant 5)

Clarity of Station Instructions and Case Complexity

The nature of OSCE station instructions significantly influenced student performance. When instructions were clear and concise, students found it easier to navigate the tasks. However, ambiguous or overly complex case presentations created confusion and reduced efficiency.

..Some stations had very straightforward instructions, but others were vague, and it wasn’t clear what exactly we were supposed to do... (FGD 002, Participant 1)

Examiner Influence and Subjectivity

The role of examiners in the OSCE had a notable impact on student performance. While some examiners provided guidance and ensured fairness, others were perceived as overly strict or disengaged. Variability in marking criteria also raised concerns about the consistency of assessments.

..Some examiners are friendly and encourage us, but others seem uninterested, which makes us even more nervous... (FGD 005, Participant 1)

Students further identified examiner conduct during OSCE as a factor that influences performance.

..Some lecturers will start quizzing you outside the station content...it’s unfair and demoralizing... (FGD 004, Participant 3)

Availability of Resources and Station Setup

The availability of necessary materials and equipment at each OSCE station influenced students’ ability to demonstrate their competencies effectively; students noted that this affected performance. Participants reported inconsistencies in resource allocation, with some stations lacking essential tools.

..There were times when basic things like gloves or sphygmomanometers were missing, and we had to improvise. This affected how well we performed... (FGD 002, Participant 3)

Prior Exposure and Experience in Clinical Rotations

Students with prior clinical exposure and experience in hospital settings generally reported performing better in the OSCE.

..Having done similar procedures in the hospital, I felt more confident in executing them during the OSCE... (FGD 003, Participant 2)

Bias

Most students identified instructor bias as a factor affecting performance. This was generally attributed to gender bias and was noted especially in traditional methods and at manned OSCE stations.

..Gender also is a risk factor. if you’re a man, you are mostly going to be at a disadvantage... (FGD 003, Participant 4) another student echoed ...Examiners are sometimes biased...chances of being discriminated are much more if you are male than if you’re a lady... (FGD 003, Participant 1)

Organization, Validity and Reliability of OSCE in Relation to Other Methods

Table 5 indicates participants’ views regarding the organization, validity, and reliability of OSCE in comparison to other techniques. The table also shows the statistical significance of differences between nursing and medicine students’ responses. Nearly half of respondents, 42.0% (n = 288, P-value = 0.006), agreed that OSCE scores provide a true measure of clinical skills compared to logbooks or long-case examinations. Similarly, 40.2% (n = 276, P-value = 0.072) considered OSCE scores more standardized than those of other methods of assessment.

Table 5 Organization, Validity and Reliability of OSCE

The experience of OSCE was noted as practical and useful by 61.5% (n = 422, P-value < 0.001) of students. In assessing equity in marking, 46.9% (n = 322, P-value = 0.215) felt that personality, ethnicity, and gender did not influence OSCE scores. Lastly, 39.7% (n = 272, P-value < 0.001) of respondents reported that involvement in the OSCE presented more difficulty than other types of evaluation.

Organization and Structural Integrity of OSCE

OSCE was described as a structured and objective assessment tool, designed to evaluate multiple clinical competencies within a controlled setting. Students highlighted its systematic nature, where candidates rotate through standardized stations, each focusing on specific clinical skills such as history-taking, physical examination, procedural tasks, and communication.

..The OSCE is well-organized because each station focuses on a different skill, ensuring that we are tested on a broad range of competencies... (FGD 006, Participant 10)

Despite its structured format, some students noted inconsistencies in station setup and variations in the complexity of tasks across different examination cycles. The presence of standardized patients and mannequins in some stations enhanced realism, but occasional logistical challenges, such as limited equipment and unclear instructions, affected the smooth execution of the assessment.

..While some stations are well-prepared, others lack basic tools, which affects our ability to demonstrate our skills properly... (FGD 002, Participant 3)

Validity of OSCE in Assessing Clinical Competence

Students largely recognized the OSCE as a valid assessment method, particularly in evaluating practical and decision-making skills. Unlike traditional written examinations, which primarily test theoretical knowledge, the OSCE allowed students to demonstrate their ability to apply clinical knowledge in simulated real-world scenarios.

..OSCE is better because it gives the standard, and at the end, I will be doing the same thing when I go to practice... (FGD 006, Participant 8)

Students noted the need for more experienced instructors to carry out the more subjective traditional evaluation methods:

..Long case exam should only involve senior lecturers who know what a student at their level is supposed to know... (FGD 001, Participant 2)

Further, students compared it to more theoretical evaluation methods:

..In written exams, you might recall information without knowing how to apply it. The OSCE forces you to think critically and act as you would in a real clinical setting... (FGD 003, Participant 1)

However, concerns were raised about the OSCE’s ability to comprehensively assess long-term clinical reasoning, as the limited time at each station may not always allow for in-depth evaluation of complex medical decision-making.

..Some cases require more time to analyze and solve, but the OSCE is fast-paced, sometimes making it difficult to showcase deep clinical reasoning... (FGD 001, Participant 4)

Reliability and Consistency of OSCE Results

The reliability of the OSCE was acknowledged due to its standardized nature, where all students are subjected to the same stations and assessed using predetermined marking schemes. This approach was seen as minimizing bias compared to traditional oral or practical exams, where examiners may apply subjective judgment.

..Since we all go through the same stations and are marked based on specific criteria, it feels fairer than other exams where different students might face different questions... (FGD 004, Participant 5)

Students further noted that the fairness of OSCE makes passing easier, and they emphasized that a student who fails an OSCE is generally lacking. One student noted:

..OSCE tries to pick the extremes. If you fail an OSCE exam, it means generally you are a very poor student... (FGD 005, Participant 1)

Students noted that, unlike OSCE, the marks obtained in the long case do not translate into practical proficiency. However, it was also noted that whether even OSCE marks translate into clinical competence depends on several factors, including the department:

..For me, I think the OSCE exam for particular departments translates to your competence because it assesses a wide variety of areas that have been covered... (FGD 002, Participant 3)

another student added

..Long case does not translate to competence because it depends on who’s examining, which is subjective... (FGD 002, Participant 6)

Nonetheless, examiner variability was identified as a factor that could affect consistency. While some examiners strictly adhered to the marking rubrics, others were perceived as lenient or overly critical, leading to concerns about grading discrepancies.

..Some examiners are very strict, while others are more lenient, and this can create differences in the marks scored... (FGD 003, Participant 2)

Comparison with Other Assessment Methods

Compared to traditional assessment formats such as multiple-choice questions (MCQs), essay-based exams, and long case clinical assessments, OSCE was seen as more practical and skill-oriented as highlighted in Table 6. Many students appreciated its emphasis on hands-on skills rather than rote memorization.

..MCQs test knowledge, but they don’t show whether you can actually apply it. OSCE forces you to perform the skills, which is more relevant for clinical practice... (FGD 001, Participant 2)

Table 6 Comparative Perception of Different Assessment Methods

However, some students found the long-case assessment method, which involves extended interaction with a single patient, to be more reflective of real clinical encounters than the fragmented nature of OSCE stations.

..In a long case, you get to spend time with the patient, gather a full history, and make a diagnosis, just like in the hospital. The OSCE is good, but it’s more fragmented... (FGD 005, Participant 4) another student pointed out its bias:

...long case actually has a lot of bias... (FGD 001, Participant 4) another student echoed ...long case, it mostly favors some people... (FGD 001, Participant 1)

Finally, one student noted that examiners did not listen to all the information gathered:

..My experience with a long case is that after taking all my time to clerk a patient the examiner does not take time to listen to what I am telling them... (FGD 002, Participant 5)

Challenges and Recommendations for Improvement

While the OSCE was widely regarded as an effective assessment tool, students identified several areas for improvement, including logistical issues, time constraints, and examiner variability. They proposed enhancements such as increasing practice sessions before the actual OSCE, standardizing examiner training, and integrating a feedback mechanism to help students understand their performance.

..If we could get structured feedback after the OSCE, it would help us know where we went wrong and how to improve... (FGD 004, Participant 4)

Some students suggested that more subjective methods like the long case should be carried out by more experienced clinical instructors:

..Long case exam should only involve senior lecturers who know what a student at their level is supposed to know... (FGD 002, Participant 3)

Additionally, students suggested complementing OSCE with other assessment methods, such as direct clinical observation during rotations, to ensure a more comprehensive evaluation of their competencies.

…OSCE should be used for sensitive exams like end of semester, as it puts all students at the same ground… (FGD 006, Participant 2)

Students also suggested using OSCE alongside other methods for a holistic assessment of clinical competence:

..Blend long case, OSCE, and viva examinations to provide a holistic assessment of students’ clinical competence... (FGD 001, Participant 5)

Instructors’ Perception of OSCE

As shown in Table 7, the majority of instructors, 97.8% (n = 45), agreed that it is helpful for OSCE to be part of the curriculum, while 78.3% (n = 36) acknowledged its contribution towards the assessment of psycho-motor skills. Furthermore, 84.8% (n = 39) agreed that it builds students’ confidence in clinical practice. OSCE was also valued for enabling faculty members to evaluate their own level of knowledge, as noted by 67.4% (n = 31), with 58.7% (n = 27) admitting that it helps faculty members assess their own psycho-motor skills. Regarding fairness, 71.8% (n = 33) considered OSCE objective, while 58.7% (n = 27) believed it is fair to all students. Most instructors, 76.1% (n = 35), supported utilizing it as a summative or blended summative-formative assessment. However, 58.7% (n = 27) noted that OSCE is harder to prepare than traditional assessments, even though 93.4% (n = 43) said they feel confident administering it.

Table 7 Instructors’ Perception of OSCE

OSCE was generally regarded as interesting, 89.1% (n = 41), thorough, 71.8% (n = 33), and easy to pass, 54.3% (n = 25). Nearly half, 47.8% (n = 22), found it less stressful, and the same proportion found it exhausting and lengthy. However, 84.8% (n = 39) believed it enhances evaluation methods, and 89.1% (n = 41) found it a valuable learning experience for both lecturers and students.

Qualitative Results

General Perception of OSCE

Clinical instructors overwhelmingly endorsed OSCE as a structured, objective, and comprehensive assessment tool. They highlighted its ability to reduce examiner bias through standardized stations and multiple assessors, in contrast to traditional assessments like long or short cases.

..It gives you at least some objectivity…where many examiners are involved, you reduce a bit the biasness... (KII, Lecturer)

Another instructor emphasized

..It is standardized, so every student is assessed on the same skill…it is fair. (KII, Senior Lecturer)

Many instructors also emphasized that OSCE allows for broad coverage of clinical competencies, including communication, procedural, diagnostic, and behavioral skills.

..You’re able to assess different conditions from a student…it’s not just one thing... (KII, Senior Lecturer), a lecturer affirmed ...OSCE is more clinical and holistic...integrating physical exam, diagnosis, and management... (KII, Lecturer)

Comparative Advantages Over Traditional Methods

Instructors favored OSCE over traditional methods for being more time-efficient, objective, and inclusive of multiple clinical scenarios.

..OSCE is labour intensive but time-efficient…in three hours, you can assess 30 students... (KII, Teaching assistant) ...It’s more objective than long cases, where bias and first impressions can heavily influence grades... (KII, Lecturer)

Challenges and Barriers to Implementation

Despite its strengths, instructors noted several barriers to the effective implementation of OSCE:

  • Human resource limitations: Many instructors pointed out the difficulty in securing enough trained staff for multiple stations.

..You need 10 lecturers to manage 10 stations…manpower is a big issue... (KII, Senior Lecturer) ...We must mobilize examiners which is sometimes hard... (KII, Lecturer)

  • Environmental constraints: Congested wards, noise, and overlapping hospital activities disrupt OSCE setups.

..The mix-up of patients, nurses, and exams creates noise and disorganization… (KII, Teaching Assistant) ...We interrupt the normal work in the wards... (KII, Assistant Lecturer)

  • Preparation and infrastructure: The need for adequate facilities, patients, and equipment was a common concern.

..Sometimes conditions for OSCE stations aren’t realistic due to lack of resources... (KII, Lecturer)

Student-Related Issues

Instructors identified student anxiety, unfamiliarity with OSCE format, and poor time management as major factors affecting performance.

..Some students panic and get disorganized after tough first stations… (KII, Lecturer) ...Students often complain that time is too short…or they haven’t practiced enough... (KII, Senior Lecturer)

Factors Influencing Performance

Instructors also highlighted several factors influencing student performance during OSCEs. Student preparedness emerged as a critical factor, with instructors noting that students who have exposure to OSCE-style assessments before their exams tend to perform better. One lecturer explained,

..Students who practice OSCE beforehand perform better... (KII, Lecturer)

Environmental factors also play a role in influencing performance. In particular, noise and distractions from other ward activities were mentioned as factors that could negatively impact a student’s ability to focus during the examination. As one clinical instructor observed,

..Noise from ward activities affects student concentration... (KII, Teaching Assistant)

Perceived Impact on Clinical Competence

There was consensus that OSCE, while not perfect, provides a credible estimate of clinical competence.

..It gives a sneak peek into the student’s abilities…better than long case or theory... (KII, Lecturer) ...If tailored to assess real clinical skills, OSCE can assess competence well... (KII, Assistant Lecturer)

Still, some instructors noted that OSCE alone is not enough to determine long-term clinical proficiency and should be complemented by internships and repeated practice.

..Competence is acquired through repetition...OSCE just determines baseline skills... (KII, Senior Lecturer)

Recommendations for Improvement

Instructors provided several actionable suggestions to enhance OSCE implementation, including: increasing manpower and faculty training; using realistic scenarios and improving logistical planning; exposing students to OSCE-style assessment early and often; ensuring better examiner conduct and communication; and providing transparent rubrics and preparatory briefings. One key recommendation was the need for training workshops for examiners to improve their understanding and implementation of OSCE best practices. One instructor emphasized:

..Examiners should be trained in OSCE best practices... (KII, Senior Lecturer)

Additionally, instructors suggested the creation of dedicated OSCE spaces to improve the examination environment. As a clinical instructor noted,

..We need a dedicated OSCE space to avoid ward disruptions... (KII, Lecturer)

Other instructors added

..Train lecturers on good practices…and use blueprints, checklists, and global standards... (KII, Senior Lecturer) ...Expose students to OSCE during rotations so it’s not new to them... (KII, Lecturer)

Discussion

This multi-institutional, multi-country study examined undergraduate medical and nursing students’ and clinical instructors’ perceptions of the OSCE. The findings revealed that both instructors and students generally have a positive perception of OSCE. OSCE was appreciated for its structured nature, objectivity, focus on practical skills, and elimination of bias. This is in agreement with a study from a teaching hospital in Ethiopia, which found that OSCE minimizes examiner bias through standardized tasks and scoring. Unlike traditional assessment methods such as the long case, viva, and short case, which rely more heavily on a single examiner and are thus more prone to bias and subjectivity, OSCE has multiple stations with different examiners, as echoed by respondents in the FGDs and KIIs, thereby minimizing the effect of individual examiner biases. By doing so, the impartiality of the evaluation is enhanced, as tasks are spread across several assessors and stations, minimizing subjectivity while providing more reliable results.19 Further, our results highlighted a significant difference in perception between medical and nursing students.

However, examiner-related factors still contribute to variability in scoring. Studies have found that examiner stringency or leniency led to differences in OSCE scores, resulting in discriminatory grading.20 This is consistent with our study, where students and clinical instructors reported a notable impact of examiners on performance, especially in traditional assessment methods like the long case and at OSCE stations with an examiner. This variability was noted to lead to false positive or negative results at individual stations, though the overall exam-level impact is reduced.20 To counteract such biases, measures like implicit bias training, frame-of-reference training, and using multiple examiners per station have been recommended.21 Further recommendations from our study included training clinical instructors on how to conduct OSCE.

The shortcomings outlined in this study are not specific to SSA. Across the globe, time pressure has been a persistent problem in the administration of the OSCE. Hodges et al (1999), in one such study, determined that the set time at stations cannot capture the richness of some tasks.6 This was also the case for the students in our study: most indicated that the time provided was not enough, especially at stations requiring both procedural and cognitive engagement, resulting in incomplete demonstrations of proficiency.

Furthermore, our research identified that a considerable proportion of lecturers (93.4%) were able to prepare and use the OSCE. This contrasts with a study from Ethiopia, where only 23% of clinical instructors had ever been trained in the use of OSCE.19 The difference could reflect increased research on the OSCE, which may have raised awareness and driven training adapted to different settings and academic programs.4,22 Further, respondents reported differences in examiners’ behavior during assessments; such inconsistency in examiner behavior has been reported elsewhere as well.23,24 Structured training of examiners and the use of global rating scales and checklists have been recommended in most settings to enhance scoring consistency and to more accurately capture clinical reasoning and communication skills; however, analytic global ratings have been shown to have substantially higher internal consistency than checklists.24,25 These interventions may be particularly relevant to SSA institutions, where they are not yet institutionally prevalent; training instructors, as suggested by the clinical instructors in this study, would help enhance objectivity during evaluation.

OSCE-related anxiety, particularly among students, is a prevalent and widely documented phenomenon. In our study, 80.8% of students indicated that anxiety affected their performance, mirroring a study by Kim (2016) and others in which learners described a high prevalence of anxiety during OSCE.26 This may be due to the high-stakes nature of the OSCE; by contrast, midwifery students were found to experience more anxiety with traditional assessment methods than with the OSCE.27 This points to differences in anxiety levels across health professions students, as also noted in this study, and underscores the need for supportive environments, pre-exam preparation, and psychological conditioning as key parts of OSCE planning.

Logistical challenges seen in our study, such as shortages of required materials, poor station organization, and delayed communication, are indicative of larger systemic issues in SSA health education programs. These challenges are not unique to SSA; similar resource constraints have been observed in North Africa, Southeast Asia, and Ethiopia, among other places, affecting the effectiveness of the OSCE and potentially compromising its validity and consistency.28–30 While the use of checklists and standardized patients is meant to maximize consistency, logistical differences may reduce the validity of the exam and introduce unintended bias. This was evidenced by Hodges et al (1999), who noted that binary checklists may not be valid measures of increasing clinical competence.6

Finally, students noted a lack of immediate feedback, an area that has received increasing attention in recent literature. In well-resourced settings, the addition of immediate or structured feedback following OSCEs has been shown to increase student performance and satisfaction. A study evaluating immediate feedback during OSCEs found that students who received 2 minutes of feedback after a 4-minute examination showed a substantial improvement in performance; this approach was deemed practical and beneficial for enhancing competency in criterion-based tasks, and both students and examiners valued it as a learning tool.31,32 Our findings suggest that SSA health training institutions would benefit from added feedback mechanisms, not only to guide student development but also to strengthen the formative value of the OSCE.

Notably, instructors predominantly emphasized the significance of the OSCE in evaluating cognitive and psychomotor skills and in building student confidence. These sentiments align with European and Asian research in which instructors valued OSCEs for ensuring students’ readiness for clinical practice.33 However, preparing and administering OSCEs requires significant time and effort, as highlighted by Lavery (2022) in research on advanced nurse practitioner exams.34 These challenges are further echoed by other studies documenting the labor-intensive nature of OSCE preparation and delivery.35 Despite these challenges, Shrivastava (2021) acknowledges that the OSCE can be of immense value in monitoring learning and its progression,35 making it essential to safeguard its quality.

Digital OSCE designs promise scalability and remote access, which could be advantageous for institutions facing logistical constraints. However, experiences during the COVID-19 pandemic highlighted challenges such as technical failures and inequalities in access to equipment and reliable internet. These issues are particularly relevant for SSA institutions, where digital infrastructure remains a significant barrier. Any move to e-OSCE would require careful piloting, substantial investment in technology, and faculty capacity development to address these challenges effectively.33,36

This research also emphasizes the need for targeted strategies that reflect the varying educational contexts of medical and nursing students. Our results showed a significant difference in the perceptions of nursing and medical students towards evaluation methods, including the OSCE, and current literature suggests that these two groups may have different OSCE experiences owing to differences in curricula and clinical exposure.33,35,37 This calls for interventions targeted at different health professions students, acknowledging the differences in their perceptions of evaluation methods.

The implications for policy and institutional practice are clear: to make the OSCE more reliable and effective, institutions need to focus on several key areas. These include developing validated question banks, providing examiners with rigorous assessment and communication training, and ensuring infrastructure support for consistent OSCE delivery. The literature emphasizes that incorporating mock OSCEs, pre-assessment orientation, and post-assessment feedback sessions can significantly enhance student performance and their perception of fairness.37 The mock OSCE, in particular, has been shown to improve confidence and preparedness for summative assessments.38 Policymakers should consider investing in clinical skills centers and faculty development programs focused on evaluation. Evidence suggests that such investments can improve the quality of OSCE implementation, as seen in Taiwan following the announcement of high-stakes OSCE requirements.39 Faculty development programs are essential to address examiner variability and to standardize assessment practices. Further, our study noted the need to train instructors in setting and administering the OSCE, and to provide dedicated OSCE rooms so that ward work is not disrupted and students are not interrupted by congested ward environments. Finally, students agreed that no single method is sufficient to test practical skills; they therefore suggested mixed-method assessment as the best approach.

Strengths and Limitations of This Study

The primary strength of this research is its large, multi-country sample spanning five institutions in Sub-Saharan Africa, offering broad insight into OSCE experiences in diverse settings. It incorporated both student and clinical instructor viewpoints and triangulated quantitative data with qualitative focus group and key informant interview findings, increasing the richness and validity of the results. Limitations include its reliance on self-reported data, with potential social desirability and recall bias.

Conclusion

Overall, this study supports the continued use and development of the OSCE as a tool for measuring clinical competence across SSA. While the OSCE is widely deemed comprehensive, fair, and representative of clinical practice demands, its effectiveness is tempered by local contextual conditions such as resource availability, examiner inconsistency, and student anxiety. Standardizing OSCE procedures to international best practices while factoring in local requirements will play a significant role in strengthening health professions education in SSA. No single method is sufficient to test clinical competence; mixed-method assessment is the best way to assess practical skills.

Abbreviations

FGD, focus group discussion; KII, key informant interview; OSCE, Objective Structured Clinical Examination; SSA, Sub-Saharan Africa.

Data Sharing Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Ethics Approval and Consent to Participate

Ethical approval was provided by the Busitema University Research Ethics Committee (BUFHS-2023-132), and administrative clearance was obtained from the other participating institutions. Prior to obtaining ethical clearance from the respective bodies, a Memorandum of Understanding was signed by the Vice Chancellors of the participating institutions, providing the framework for the inter-institutional collaboration on this study.

In line with the Declaration of Helsinki, informed consent was obtained from all participants, and their confidentiality and anonymity were strictly maintained. Participation was entirely voluntary, with no consequences for withdrawal at any stage. The study posed no harm to participants, and they were fully informed about its purpose, procedures, potential risks, and benefits. All data collected were securely stored and used exclusively for research purposes, ensuring adherence to the highest ethical standards throughout the study.

Consent for Publication

All authors have reviewed the final manuscript and provided their consent for publication.

Acknowledgments

The authors extend their gratitude to AFREhealth and the participating universities for their support in conducting this research. Special thanks go to the students who took part in this study; their willingness to participate made these insights possible.

We also appreciate the research assistants who supported this work, namely Asiimwe Winnie Catherine, Nakawuka Betty, Oluwatosin Akomolade, Francisca Omowaare, and Chiemerigo Bright, for their contributions to its success.

Funding

This study was supported by grant 1R25TW011217 from the US National Institutes of Health (NIH)/Fogarty International Center (FIC) which also includes co-funds from the US Department of State’s Office of the US Global AIDS Coordinator and Health Diplomacy (S/GAC) and the President’s Emergency Plan for AIDS Relief (PEPFAR) to the African Forum for Research and Education in Health (AFREhealth).

Disclosure

The authors declare that they have no competing interests in this work.

References

1. Zayyan M. Objective structured clinical examination: the assessment of choice. Oman Med J. 2011;26:219.

2. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975;1:447–451.

3. Dewan P, Khalil S, Gupta P. Objective structured clinical examination for teaching and assessment: evidence-based critique. Clin Epidemiol Global Health. 2024;25:101477. doi:10.1016/j.cegh.2023.101477

4. Rayyan MR. The use of objective structured clinical examination in dental education- a narrative review. Front Oral Health. 2024;5. doi:10.3389/froh.2024.1336677.

5. Pérez Baena AV, Sendra Portero F. The objective structured clinical examination (OSCE): main aspects and the role of imaging. Radiología. 2023;65:55–65. doi:10.1016/j.rx.2022.09.010

6. Hodges B, Regehr G, McNaughton N, Tiberius R, Hanson M. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129–1134.

7. European Union. Guidance on outcomes for joint clinical assessments; 2024.

8. World Bank. In the Middle East and North Africa, a Very Gradual Shift Away From Caring Only About Exam Results; 2014.

9. Akhigbe T. Summative objective structured clinical examination assessment: a mini review. Int J Med Rev. 2018;5:140–142. doi:10.29252/IJMR-050402

10. Cerna S. Outcomes following management of congenital and acquired diaphragmatic hernia in a tertiary care centre-a case series. J Contemporary Med Edu. 2023;13:49–131

11. Abd El-Nasser Ali G, Yahya Mehdi A, Abdel-Kader Ali H. Objective Structured Clinical Examination (OSCE) as an assessment tool for clinical skills in Sohag University: nursing students’ perspective. J Environ Studies. 2012;8:59–69

12. Susan F. The Objective Structured Clinical Exam (OSCE): a qualitative study exploring the healthcare student’s experience. Student Engagement Exp J. 2012:1–8.

13. Kiguli S, Mubuuke R, Baingana R, et al. A consortium approach to competency-based undergraduate medical education in Uganda: process, opportunities and challenges. Educ Health. 2014;27:163–169. doi:10.4103/1357-6283.143774

14. Bvumbwe T, Mtshali N. Nursing education challenges and solutions in Sub Saharan Africa: an integrative review. BMC Nurs. 2018;17. doi:10.1186/s12912-018-0272-4

15. Kouladoum JC. Inclusive education and health performance in Sub Saharan Africa. Soc Indic Res. 2023;165:879–900. doi:10.1007/s11205-022-03046-w

16. Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part I: an historical and theoretical perspective. Med Teach. 2013;35.

17. Patrício MF, Julião M, Fareleira F, Carneiro AV. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach. 2013;35:503–514. doi:10.3109/0142159X.2013.774330

18. Abdul S, Bukhari R, Ali M. Sample size determination using Krejcie and Morgan table. 2021. doi:10.13140/RG.2.2.11445.19687

19. Fisseha H, Desalegn H. Perception of students and examiners about objective structured clinical examination in a teaching hospital in Ethiopia. Adv Med Educ Pract. 2021;12:1439–1448. doi:10.2147/AMEP.S342582

20. Chong L, Taylor S, Haywood M, Adelstein B-A, Shulruf B. The sights and insights of examiners in objective structured clinical examinations. J Educ Eval Health Prof. 2017;14(34):34. doi:10.3352/jeehp.2017.14.34

21. Tavakol M, Stewart C, Sharpe CC. Ensuring fairness in assessment in health professions education: rapid analysis tools to detect differential item functioning across groups. Int J Med Educ. 2024;15:80–83. doi:10.5116/ijme.6694.de69

22. Hijazi M, Downing SM. Objective structured clinical examinations as an assessment method in residency training: practical considerations. Ann Saudi Med. 2008;28:192–199. doi:10.5144/0256-4947.2008.192

23. Yudkowsky R, Park YS, Riddle J, Palladino C, Bordage G. Clinically discriminating checklists versus thoroughness checklists: improving the validity of performance test scores. Acad Med. 2014;89:1057–1062. doi:10.1097/ACM.0000000000000235

24. Hodges B, McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–1016. doi:10.1046/j.1365-2923.2003.01674.x

25. Pell G, Homer M, Fuller R. Investigating disparity between global grades and checklist scores in OSCEs. Med Teach. 2015;37:1106–1113. doi:10.3109/0142159X.2015.1009425

26. Kim K-J. Factors associated with medical student test anxiety in objective structured clinical examinations: a preliminary study. Int J Med Educ. 2016;7:424–427. doi:10.5116/ijme.5845.caec

27. Faramarzi M, et al. Test anxiety in objective structured clinical examinations (OSCEs) compared with traditional assessment methods in undergraduate midwifery students. Health. 2013;5:2204–2209.

28. Azevedo MJ. The state of health system(s) in Africa: challenges and opportunities. African Histories Modernities. 2017;1–73. doi:10.1007/978-3-319-32564-4_1

29. Getu A, Worku S, Asaminew T. Experience and Challenges of Objective Structured Clinical Examination (OSCE): perspective of students and examiners in a clinical department of Ethiopian University. Ethiop J Health Sci. 2020;30:417–426. doi:10.4314/ejhs.v30i3.13

30. Muller J, Reardon C, Hanekom S, et al. Training for transformation: opportunities and challenges for health workforce sustainability in developing a remote clinical training platform. Front Public Health. 2021;9. doi:10.3389/fpubh.2021.601026

31. Hodder RV, Rivington RN, Calcute LE, Hart IR. The effectiveness of immediate feedback during the objective structured clinical examination. Med Educ. 1989;23:184–188. doi:10.1111/j.1365-2923.1989.tb00884.x

32. Ngim CF, Fullerton PD, Ratnasingam V, et al. Feedback after OSCE: a comparison of face to face versus an enhanced written feedback. BMC Med Educ. 2021;21. doi:10.1186/s12909-021-02585-z

33. Ba H, Zhang L, He X, Li S. Knowledge mapping and global trends in the field of the objective structured clinical examination: bibliometric and visual analysis (2004-2023). JMIR Med Educ. 2024;10:e57772. doi:10.2196/57772

34. Lavery J. Observed structured clinical examination as a means of assessing clinical skills competencies for advanced nurse practitioners (ANPs). Br J Nurs. 2022:214–220.

35. Shrivastava SR, Shrivastava PS. Employment of objective structured clinical examination tool in the undergraduate medical training. J Sci Soc. 2021;48:145–148. doi:10.4103/jss.jss_34_21

36. Said Elshama S. How to design and apply an Objective Structured Clinical Examination (OSCE) in medical education? Iberoamerican J Med. 2021;1:51–55.

37. Al-Hashimi K, Said UN, Khan TN. Formative Objective Structured Clinical Examinations (OSCEs) as an assessment tool in UK undergraduate medical education: a review of its utility. Cureus. 2023. doi:10.7759/cureus.38519

38. Furmedge DS, Smith LJ, Sturrock A. Developing doctors: what are the attitudes and perceptions of year 1 and 2 medical students towards a new integrated formative objective structured clinical examination? BMC Med Educ. 2016;16. doi:10.1186/s12909-016-0542-3

39. Lin CW, Tsai TC, Sun CK, Chen DF, Liu KM. Power of the policy: how the announcement of high-stakes clinical examination altered OSCE implementation at institutional level. BMC Med Educ. 2013;13:8. doi:10.1186/1472-6920-13-8
