Journal of Multidisciplinary Healthcare, Volume 18

Translation, Cross-Cultural Adaptation and Reliability and Validity Studies of the Chinese Version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire

Authors: Luan S, Song J, Ruan B, Zhu Y, Liu D, Qian J

Received 17 January 2025

Accepted for publication 2 May 2025

Published 12 May 2025. Volume 2025:18, Pages 2661–2679

DOI https://doi.org/10.2147/JMDH.S518183

Checked for plagiarism Yes

Review by single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr David C. Mohr



Shuo Luan,1 Jia Song,2 Bin Ruan,1 Yuetong Zhu,1 Dongsen Liu,1 Jinghua Qian1

1School of Sports Medicine and Rehabilitation, Beijing Sport University, Beijing, People’s Republic of China; 2Department of Evidence-Based Medicine and Clinical Epidemiology, West China Hospital, Sichuan University, Chengdu, People’s Republic of China

Correspondence: Jinghua Qian, School of Sports Medicine and Rehabilitation, Beijing Sport University, Beijing, People’s Republic of China, Email [email protected]; Dongsen Liu, School of Sports Medicine and Rehabilitation, Beijing Sport University, Beijing, People’s Republic of China, Email [email protected]

Purpose: To translate and culturally adapt the Evidence-Based Practice and Evidence-Informed Practice Questionnaire into the Chinese version and evaluate its psychometric properties.
Methods: The process of translation and cross-cultural adaptation adhered to the established guidelines, followed by psychometric evaluation that assessed floor/ceiling effects, face validity, content validity, construct validity, internal consistency, and test–retest reliability. The evaluation engaged 5 experts and 279 students (170 undergraduates and 109 postgraduates) from Beijing Sport University. Questionnaire items were categorized according to Evidence-Based Practice (EBP) and Evidence-Informed Practice (EIP) concepts, with varying response options for degree and frequency. Forty students completed the Chinese version of the questionnaire for the second time after a two-week period.
Results: No floor or ceiling effects were observed. Following the revision of item 32 and the deletion of item 14, the Item-Level Content Validity Index (I-CVI) for all remaining items ranged from 0.80 to 1.00, with an average scale-level CVI (S-CVI/Ave) of 0.91. The final Chinese version of the questionnaire consists of 52 items and showed adequate internal consistency, with Cronbach’s alpha values of 0.78, 0.86, 0.86, and 0.89 for the EBP (degree and frequency) and EIP (degree and frequency) items, respectively. Test–retest scores were significantly correlated for all items, with Spearman correlation coefficients (r) ranging from 0.33 to 0.80 (p<0.05), except for item 16 (r=0.29, p=0.065). Exploratory Factor Analysis (EFA) yielded Kaiser–Meyer–Olkin (KMO) values for the EBP degree, EBP frequency, and EIP degree items ranging from 0.78 to 0.87. Bartlett’s test of sphericity was significant, and the retained factors explained 63.62%, 69.35%, and 70.91% of the total variance, respectively, after removing items 23 and 42–44 (cross-loading items).
Conclusion: The Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire demonstrated good psychometric properties for assessing the effectiveness of EBP and EIP educational programs tailored for physiotherapy and exercise rehabilitation students.

Keywords: evidence-based practice, evidence-informed practice, cultural adaptation, psychometric properties, physiotherapy

Background

Evidence-based practice (EBP) and evidence-informed practice (EIP) are fundamental concepts in the application of evidence-based approaches in public healthcare.1,2 EBP in healthcare is a relatively well-established concept that aims to continually enhance the quality of care, patient safety, and clinical outcomes, ultimately improving healthcare decision-making. The concept of EIP further develops the conventional idea of EBP and has been successively enriched and revised by several scholars.3–5 EIP has been defined as follows:

Practitioners are encouraged to be knowledgeable about findings coming from all types of studies and to use them in an integrative manner, taking into consideration clinical experience and judgment, clients’ preferences and values, and context of the interventions.3

The consensus among scholars is growing that evidence should inform practice and be integrated with other considerations, rather than being the exclusive basis for it.6,7 Substantial conceptual differences between EBP and EIP have been identified and explained in greater detail.5,7–9 Meanwhile, patient-centered medical care is a prevalent trend in modern healthcare, in which patient-reported outcome measurement (PROM) tools are vital for evaluating treatment efficacy. Progress in EBP and EIP research has contributed to improved quality of care and increased patient satisfaction.10,11

In recent years, a growing body of research has confirmed the crucial role of educational interventions in improving knowledge, attitudes, understanding, and behavior concerning EBP and EIP.12 This importance extends beyond nursing education to other health-related professions like physiotherapy, Chinese medicine, and chronic disease prevention programs, emphasizing the value of educational interventions in enhancing students’ future clinical skills.2,5,13,14 The significance of EBP and EIP in physiotherapy and related fields is consistently underscored.15,16 It is imperative for individuals intending to pursue careers in physiotherapy to acknowledge the scientific nature and complexity of knowledge translation.1 Previous studies have shown that robust EBP and EIP educational programs, which offer students the necessary knowledge, skills, and confidence, can enhance students’ inclination towards EBP/EIP and satisfactory clinical practice ability post-graduation.5,17 Conversely, evaluating students’ extensive feedback can help educators craft tailored curricula to meet their instructional objectives.18

Consequently, a substantial demand exists for a quantitative and objective assessment tool to gauge the effectiveness of specialized educational programs on EBP and EIP in enhancing understanding, attitudes, knowledge, and behaviors regarding the incorporation of evidence into practice.5 It is also crucial to have a reliable and quantitative tool for assessing the educational outcomes of physiotherapy and related educational programs that incorporate both the EBP and EIP dimensions. In 2023, Kumah et al developed and validated the Evidence-Based Practice and Evidence-Informed Practice Questionnaire, a comprehensive tool for assessing the knowledge, attitudes, understanding, and behavior of undergraduate preregistration students in nursing and allied health disciplines.19 The original validated questionnaire consisted of 53 items (8 demographic items, 25 EBP items, and 20 EIP items). The questionnaire primarily used a Likert scale for responses, except for the demographic section and the binary questions (Yes/No) concerning EBP and EIP concepts. Items related to EBP and EIP were grouped based on response options such as “strongly agree”, “agree”, “neutral”, “disagree”, and “strongly disagree” for degree, and “daily”, “every other day”, “weekly”, “monthly”, and “never” for frequency. Specifically, the EBP degree section consisted of 13 items, the EBP frequency section included 7 items, the EIP degree section comprised 12 items, and the EIP frequency section contained 3 items. To the best of our knowledge, no formal translation or psychometric testing has been carried out on a Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire.

The study aimed to translate and culturally adapt the Evidence-Based Practice and Evidence-Informed Practice Questionnaire into Chinese. Subsequently, we sought to assess its psychometric properties, encompassing both reliability and validity, among Chinese physiotherapy and exercise rehabilitation students. This study addresses several key gaps. First, it is anticipated that this questionnaire will serve as a valuable tool for healthcare educators and clinical practitioners. Second, this tool will also be validated with students in various medical disciplines, enabling the assessment of short-term and long-term efficacy of EBP and EIP curricula. Last, as EBP and EIP frameworks become increasingly integrated into medical education programs worldwide, this study is poised to raise awareness about their application in education and clinical practice, thereby enhancing overall healthcare quality.

Methods

Study Design

The study was approved by the Ethics Committee of Beijing Sport University (ID: 2024364H) and registered with the Chinese Clinical Trial Registry (ID: ChiCTR2400090943). It was conducted from October 26 to December 28, 2024, and followed the principles of the Declaration of Helsinki.

This study was carried out in two stages. The first stage involved initially translating and cross-culturally adapting the Evidence-Based Practice and Evidence-Informed Practice Questionnaire into Chinese following international recommended guidelines and methodology.20,21 A six-step translation process was undertaken to adapt the scale into Chinese, involving (1) obtaining formal permission from the original developer, (2) translation and synthesis, (3) back translation, (4) expert committee review, (5) pilot testing of the pre-final version, and (6) submission of the final version. In the second stage, a prospective assessment was carried out to evaluate the essential psychometric properties of the Chinese version of the questionnaire, including both validity (face validity, content validity, and construct validity) and reliability (test–retest reliability and internal consistency) tests.

Study Procedure

Translation and Cross-Cultural Adaptation of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire into Chinese

Phase 1: Obtaining Formal Permission

We obtained formal written permission for cross-cultural adaptation and research cooperation by contacting the original questionnaire developer, Professor Elizabeth Adjoa Kumah of the United Kingdom, via email.19

Phase 2: Translation and Synthesis

The comprehensive structure of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire includes survey instructions that begin with defining terms to provide respondents with an overview of EBP, EIP, and other relevant terms before delving into the main body of the questionnaire. The questionnaire takes approximately 10 minutes to complete and the respondents are asked to tick/answer appropriately. During this phase, the original version was translated into Chinese by two native Chinese speakers who are also bilingual with English as their second language. Translator 1, a rehabilitation physician and a faculty member heading the Evidence-Based Program at Beijing Sport University, and Translator 2, a postgraduate student from the Department of Evidence-based Medicine and Clinical Epidemiology at Sichuan University, were responsible for the translation. Any discrepancies between the two translations were resolved through discussion between the translators and researchers, resulting in a documented synthesis of the two versions.

Phase 3: Back Translation

Two native English speakers, proficient in Chinese as a second language, were tasked with independently translating the Chinese version back into English. These translators are graduate students currently pursuing their studies at Beijing Sport University, and both have been studying Chinese for over five years. One of them has a background in healthcare, while neither translator has prior knowledge of the content in the original English version. The researchers recorded the unintentional omissions, additions, or changes in meaning evident in the back translation process.

Phase 4: Expert Committee Review

Comprising all four translators, two EBP curriculum instructors, one physiotherapist, and one athletic trainer, the Expert Committee evaluated every version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire formed in the initial three phases. Through this process, these experts synthesized diverse perspectives to achieve agreement on a pre-final draft version.

Phase 5: Pilot Testing of the Pre-Final Version

To enhance conceptual, semantic, and content clarity of the translated Chinese version, a preliminary evaluation step was conducted to ensure comprehensibility before initiating psychometric testing. Ten participants were recruited to assess the pre-final version, followed by interviews to collect feedback on their comprehension of the instructions, items, and corresponding response options. Face validity was assessed via this pilot testing. Any elements in the instrument/items that were unclear to more than 20% of the participants were reexamined.

Phase 6: Submission of Final Version

The Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire, used for further psychometric testing, was submitted to the original developer, Professor Elizabeth Adjoa Kumah, via email. A supportive statement elucidating cultural adaptation accompanied the questionnaire.

Psychometric Properties of the Chinese Version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire

Samples and Setting

Respondents, comprising undergraduate and postgraduate students majoring in physiotherapy, exercise rehabilitation, and sports medicine, were recruited from the School of Sports Medicine and Rehabilitation at Beijing Sport University, China, through face-to-face interviews, posters, and social media. The inclusion criteria required participants to be native Chinese speakers and willing to provide formal informed consent. The exclusion criteria were significant visual or auditory impairments affecting evaluation, or participation in other relevant studies. Respondents were assured of confidentiality, anonymized data collection, and blinded data analysis. Access to data was restricted to authorized personnel to ensure participant privacy.

Both web-based and paper-based questionnaires were administered, commencing with study consent and survey instructions. Respondents then completed the Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire, with completion time recorded.

Sample Size

The study employed Gorsuch’s respondent-to-item ratio to determine the sample size, with five respondents allocated per questionnaire item for validation.22 Thus, we expected to recruit at least 265 (53 × 5) respondents for the validation study. Allowing for a possible 5% of invalid questionnaires (those missing key information or data), the final sample size was 279.
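The sample-size arithmetic above can be sketched as follows. This is an illustrative calculation only; one common way to apply the 5% allowance is to inflate the minimum so that, after discarding invalid questionnaires, at least 265 usable responses remain:

```python
import math

ITEMS = 53                 # items in the original questionnaire
RESPONDENTS_PER_ITEM = 5   # Gorsuch's respondent-to-item rule of thumb
INVALID_RATE = 0.05        # anticipated proportion of invalid questionnaires

minimum_valid = ITEMS * RESPONDENTS_PER_ITEM  # 265
# Inflate recruitment so ~95% of the sample still meets the 265 floor.
recruited = math.ceil(minimum_valid / (1 - INVALID_RATE))
print(minimum_valid, recruited)  # 265 279
```

Multiplying 265 by 1.05 and rounding up gives the same figure of 279, so either reading of the 5% allowance is consistent with the reported sample size.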

Quantitative Research Method

Floor/Ceiling Effects

The presence of floor/ceiling effects was determined by assessing whether over 15% of respondents attained the minimum or maximum score on the Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire.23
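As an illustrative sketch (not the authors' code), the 15% criterion can be checked with a simple proportion count over respondents' total scores; the score range and data below are hypothetical:

```python
def floor_ceiling_effects(totals, min_score, max_score, threshold=0.15):
    """Return (floor_effect, ceiling_effect): True when more than
    `threshold` of respondents attain the scale minimum or maximum."""
    n = len(totals)
    floor_share = sum(t == min_score for t in totals) / n
    ceiling_share = sum(t == max_score for t in totals) / n
    return floor_share > threshold, ceiling_share > threshold

# Hypothetical total scores on a 10-50 scale for ten respondents
print(floor_ceiling_effects([10, 10, 10, 22, 35, 35, 41, 41, 48, 50], 10, 50))
# (True, False): 30% at the floor exceeds 15%; 10% at the ceiling does not
```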

Content Validity

Content validity is assessed based on how accurately a measurement tool captures the specific concept’s targeted aspects. The Content Validity Index (CVI) is frequently used for quantitative assessment, facilitating calculations at both the item level (I-CVI) and the scale level (S-CVI). In this study, a panel of five experts with varied backgrounds and expertise, comprising a psychologist, a professional teacher of EBP, a rehabilitation physician, an athletic trainer, and a physiotherapist, appraised the content validity. Content experts were tasked with individually evaluating the relevance of each tool item on a 4-point Likert scale (1=not relevant, 2=somewhat relevant, 3=relevant, 4=very relevant). The I-CVI calculation involves dividing the number of experts who rated an item as relevant or clear (with a rating of 3 or 4) by the total number of experts. Items with an I-CVI exceeding 0.79 were deemed appropriate.24 The average scale-level CVI (S-CVI/Ave), defined as the average of the I-CVI scores for all items on the scale, is calculated by summing all I-CVIs and dividing by the total number of items.25 An acceptability threshold for the S-CVI/Ave is set at 0.90.26 Content validity evaluation incorporated the Kendall coefficient of concordance and its associated p-value. A Kendall W value within the range of 0.40 to 0.75 signifies fair to good agreement, while values at or above 0.75 denote excellent agreement.27
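The I-CVI and S-CVI/Ave computations described above are straightforward proportions and averages. A minimal sketch, using hypothetical expert ratings rather than the study's data:

```python
def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(items):
    """Scale-level CVI (averaging method): mean of the I-CVIs."""
    return sum(i_cvi(r) for r in items) / len(items)

# Hypothetical ratings by five experts for three items (1-4 Likert scale)
items = [
    [4, 3, 4, 4, 3],  # I-CVI = 1.00
    [2, 3, 2, 4, 4],  # I-CVI = 0.60 -> below 0.80, flag for revision/removal
    [3, 4, 4, 3, 4],  # I-CVI = 1.00
]
print([i_cvi(r) for r in items], round(s_cvi_ave(items), 2))
# [1.0, 0.6, 1.0] 0.87
```

The second item illustrates the situation of item 14 in this study: an I-CVI of 0.60 from five experts falls below the 0.80 acceptability threshold.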

Internal Consistency

Internal consistency within each section was assessed using Cronbach’s alpha analysis, with a value of 0.70 or higher indicating acceptable internal consistency.28
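The authors computed Cronbach's alpha in SPSS; as an illustrative sketch of the underlying formula (alpha = k/(k-1) × (1 − Σ item variances / total-score variance)):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)   # per-item sample variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Sanity check: perfectly consistent (identical) items yield alpha = 1
print(cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]]))  # 1.0
```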

Test-Retest Reliability

The stability of the scale over time, known as test–retest reliability, was evaluated using the Spearman correlation coefficient. A cohort of forty students was randomly chosen for retesting after a two-week interval. Correlations were classified by the Spearman correlation coefficient (r) as strong (r≥0.5), moderate (0.35≤r<0.5), or weak (0.2≤r<0.35).29
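A sketch of this test–retest computation, using hypothetical paired responses (the study itself used SPSS):

```python
from scipy.stats import spearmanr

def classify(r):
    """Strength bands used in this study (Ref. 29)."""
    if r >= 0.5:
        return "strong"
    if r >= 0.35:
        return "moderate"
    return "weak" if r >= 0.2 else "negligible"

# Hypothetical test/retest Likert responses for one item, ten respondents
test   = [4, 5, 3, 4, 2, 5, 3, 4, 5, 2]
retest = [4, 4, 3, 5, 2, 5, 3, 4, 4, 3]

r, p = spearmanr(test, retest)  # handles tied ranks via average ranking
print(round(r, 2), p < 0.05, classify(r))  # 0.81 True strong
```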

Construct Validity

Construct validity assessment is crucial in determining the effectiveness of a test in measuring its intended concept. To establish the validity of the questionnaire, we conducted an exploratory factor analysis (EFA) on the translated and culturally adapted questionnaire using the complete sample. Following the method adopted in the English cohort, separate EFAs were conducted for the EBP degree, EBP frequency, and EIP degree sections, because the different Likert response options could not be combined. The EIP frequency section contained only 3 items; thus, EFA was not performed on it. The Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test were utilized to assess data adequacy, with a KMO measure above 0.70 indicating reliable factor analysis.30

Because the original scale was designed to assess knowledge, attitudes, understanding, and behaviors related to EBP and EIP, we maintained a fixed number of factors: four for EBP degree, two for EBP frequency, and three for EIP degree.19 The EFA was carried out using principal component analysis, with factor loadings required to exceed 0.40 and no cross-loading between factors.
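The two adequacy checks named above have standard closed forms. As an illustrative sketch with simulated (not the study's) data, assuming the usual formulas: Bartlett's statistic is −(n−1−(2p+5)/6)·ln det(R) on p(p−1)/2 degrees of freedom, and the overall KMO compares squared correlations against squared anti-image (partial) correlations:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test that the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, df, chi2.sf(statistic, df)

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Simulated one-factor data: 279 respondents, 7 correlated items
rng = np.random.default_rng(42)
factor = rng.normal(size=(279, 1))
data = factor + 0.8 * rng.normal(size=(279, 7))

stat, df, p = bartlett_sphericity(data)
print(round(kmo_overall(data), 2), df, p < 0.001)
```

With strongly inter-correlated items, the KMO is well above 0.70 and Bartlett's test is highly significant, mirroring the pattern reported for the EBP and EIP sections here.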

Statistical Analysis

IBM SPSS Statistics 21.0 was used for all data analyses, including the validity and reliability assessments described above. Descriptive statistics were used to present respondents’ characteristics. Categorical data were analyzed using frequencies and percentages. Normally distributed data were presented as means±standard deviations (SDs), and non-normally distributed data were presented as median (percentile 25, percentile 75).

Results

Demographic Characteristics

A total of 279 Chinese students participated, with a median age of 21, a younger profile than that of the English respondents. Consistent with the original questionnaire developers’ findings, we also identified a gender imbalance among respondents (60.93% female, 39.07% male), mirroring the sex distribution of students in our research setting. A comparison of respondent characteristics between the Chinese and English cohorts is presented in Table 1.

Table 1 Comparison of Characteristics Between Respondents in the Chinese and English Cohorts

Translation and Cultural Adaptation

In the translation phase, the two translators initially faced challenges in translating the concept of “evidence-informed practice” into Chinese due to the absence of an established or universally recognized translation for EIP terminology. After full discussion, the Expert Committee reached an agreement to translate it into Chinese as “知证指导实践”. During the back translation phase, one of the translators used “clinical environment” and “major” instead of “clinical setting” and “course of study”, with these discrepancies being resolved following a comprehensive discussion by the Expert Committee members. Additionally, minor adjustments to wording were implemented in the initial two phases.

After consulting with the original developers via email, we revised item 4 to match the academic levels of the respondents, covering both undergraduate and postgraduate students, by adding response options such as “First year of postgraduate”, “Second year of postgraduate”, and “Third year of postgraduate”. Furthermore, we modified the terminology in the questionnaire from “patient care” to “rehabilitation/therapy” for respondents in physiotherapy and related fields. Additionally, we replaced the Cumulative Index to Nursing and Allied Health Literature (CINAHL), a database frequently employed in nursing education, with PubMed, a database more familiar to students in physiotherapy and exercise rehabilitation.

In the pursuit of establishing face validity, feedback from 10 respondents in the pilot study was incorporated, leading to adjustments in the word order to enhance clarity. Specific items in the original English version were identified as inverse meaning items, such as items 14, 15, and 16, with occasional presentation of response options in a reversed sequence. Despite several complaints from respondents about the complexity of these items, the research team opted to maintain the original scale’s structure and preserve its design. Additionally, some respondents noted their lack of familiarity with the EIP concept, emphasizing the importance of providing detailed explanatory instructions prior to administering the questionnaire.

Psychometric Properties of the Chinese Version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire

Floor/Ceiling Effects

There were no significant floor or ceiling effects noted, as no respondents attained minimum or maximum scores.

Content Validity

Five invited experts consented to participate in the study and submitted the completed form. Item 32 was amended to “Formally shared and discussed your research findings with classmates (eg, through journal clubs, class presentations) and/or other healthcare organizations (eg, through publications in healthcare journals, conference presentations)” to correspond with the student respondents in the study, and this item was retained following deliberation by the Expert Committee members. Two experts rated item 14 as 2 (somewhat relevant), one as 3 (relevant), and two as 4 (very relevant); with an I-CVI of 0.60, below the 0.80 threshold, item 14 was excluded. The I-CVIs for all other items ranged from 0.80 to 1.00, and the specialists’ qualitative content validity assessment indicated no redundancy or further need for modification.

The S-CVI/Ave for the entire instrument was 0.91, indicating high content validity. The Kendall’s W of 0.37 (p<0.001) indicated statistically significant agreement among the experts, although the value fell below 0.40, potentially attributable to the varied disciplines and research fields of the five experts. Details of the calculation are given in Table 2.

Table 2 Results of the Content Validity (Five Experts)

Internal Consistency and Test–Retest Reliability

Reliability analysis was performed following the content validity assessment. Internal consistency analyses yielded Cronbach’s alpha values of 0.78, 0.86, 0.86, and 0.89 for the EBP degree, EBP frequency, EIP degree, and EIP frequency sections, respectively, demonstrating satisfactory internal consistency.

Spearman correlation coefficients for EBP items with degree options ranged from 0.33 to 0.65 (p<0.05) across 12 items, with item 16 showing a value of 0.29 (p=0.065). For EBP items with frequency options, the Spearman correlation coefficient values ranged from 0.38 to 0.80 (p<0.05) across 7 items. EIP items with degree options exhibited correlation coefficient values ranging from 0.34 to 0.63 (p<0.05) across 12 items, while EIP items with frequency options showed values ranging from 0.46 to 0.60 (p<0.01) across 3 items. The Spearman correlation coefficients for the subtotal scores of the EBP and EIP questionnaires were 0.51 (p<0.001), 0.61 (p<0.001), 0.62 (p<0.001), and 0.66 (p<0.001), respectively. Results of internal consistency and test–retest reliability are shown in Table 3.

Table 3 Items Scoring, Internal Consistency, and Test–Retest Reliability Measurements of the Chinese Version of Evidence-Based Practice and Evidence-Informed Practice Questionnaire

Construct Validity

The Bartlett Test of Sphericity in the EBP degree section yielded a significant result (χ2=1048.18, df=78, p<0.001), with a KMO measure indicating sample adequacy of 0.81. An EFA of 13 items identified four factors: Factor 1 (items 20, 18, 22, 19, 24, and 23), Factor 2 (items 33, 34, 16, and 23), Factor 3 (items 15 and 21), and Factor 4 (items 17 and 13) accounted for 25.54%, 16.61%, 10.72%, and 9.33% of the total variance, respectively. Notably, item 23 exhibited cross-loading between factors.

In the EBP frequency section, significant results were observed from the Bartlett Test of Sphericity (χ2=902.57, df=21, p<0.001), with a KMO value of 0.87. Among the 7 items, six (29, 30, 28, 31, 32, 27) loaded on the first factor, explaining 50.95% of the total item variance. One item (item 25) loaded on the second factor, explaining 18.40% of the variance. Together, these two factors accounted for 69.35% of the total item variance, with the limited number of items likely contributing to the second factor containing only one item.

With a KMO value of 0.90, the EIP with degree options demonstrated suitability for EFA, as confirmed by a statistically significant Bartlett Test of Sphericity (χ2=1754.63, df=66, p<0.001). The first factor, comprising seven items (50, 47, 49, 46, 48, 44, 43), accounted for 32.11% of the variance. The second and third factors explained 23.87% and 11.29% of the variance. Notably, items 43, 44, and 42 displayed cross-loadings across factors. Due to the limited number of items in the EIP frequency section (three items), EFA was not conducted.

Following the removal of four cross-loading items, clearer factor loadings were observed for both the EBP and EIP degrees in the EFA results. Table 4 presents both the initial findings and the modified EFA results after eliminating items with cross-loadings.

Table 4 Exploratory Factor Analysis (EFA) Results of the Chinese Version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire

Discussion

The current study sought to translate and culturally adapt the Evidence-Based Practice and Evidence-Informed Practice Questionnaire into Chinese and assess its psychometric properties among undergraduate and postgraduate students in physiotherapy and exercise rehabilitation. Our findings confirm that the Chinese version demonstrates reliability and validity as a practical tool for evaluating EBP and/or EIP knowledge, attitudes, understanding, and behaviors.

For decades, policy-making bodies and healthcare professions have stressed the importance of EBP and EIP in enhancing healthcare quality, reliability, and patient outcomes. With the growing focus on developing EBP and EIP educational programs for physical therapy and allied health professions globally, a validated and objective tool is crucial for the advancement and integration of EIP.12 This distinguishes the current study from prior research on scales that solely assess EBP attitudes and behaviors.31,32 The Evidence-Based Practice and Evidence-Informed Practice Questionnaire appears to be an innovative and thorough tool for examining both EBP and EIP components.

A comprehensive literature search was conducted prior to this study, and we found that the Evidence-Based Practice and Evidence-Informed Practice Questionnaire has undergone validation among English social work students. However, it has not been translated or culturally adapted into other languages for publication. This limitation partly impedes the comparison of psychometric test results in the current study with respondents from various educational backgrounds (nursing, physiotherapy, occupational therapy, and speech and language therapy) in different countries.

Beijing Sport University provides undergraduate programs in physiotherapy and exercise rehabilitation, along with postgraduate programs in rehabilitation medicine and physiotherapy, and sports medicine. In addition to elective EBP courses, students are offered compulsory courses in Neurological Lesion Rehabilitation, Musculoskeletal Rehabilitation, and Cardiopulmonary and Chronic Disease Rehabilitation, all structured within the EBP/EIP framework. Given the limited number of students at the school, we included both undergraduate and postgraduate students following consultation with the original scale developer, who emphasized that respondents in a validation study of the questionnaire should have a basic understanding of EBP and EIP. A comparison of respondent characteristics between the Chinese and English cohorts is outlined in Table 1. Overall, Chinese student respondents were notably younger than their British counterparts, with a median age of 21.00 years compared to 31.50 years; extended education and professional experience appear to foster a deeper grasp of EBP and EIP. Additionally, Chinese respondents tended to enter clinical practice in the later phases of their undergraduate and postgraduate studies, which limited the number of first- and second-year undergraduate students recruited. Nonetheless, respondents from both nations exhibited similar levels of understanding of EBP and EIP concepts, as indicated in items 5–12 and items 35–38. Respondents’ understanding of the concept of EIP was inferior to that of EBP, suggesting that the concept of EIP should be strengthened in future educational programs.

During validation, content validity was assessed using I-CVI and S-CVI/Ave. The I-CVIs for the majority of items ranged from 0.80 to 1.00, indicating good content validity for the Chinese version, except for item 14. The original scale also excluded item 14 from subsequent construct validity analyses, and our study similarly omitted this item following expert consensus.

In the reliability analysis, the Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire demonstrated good internal consistency, as indicated by Cronbach’s alpha values of 0.78, 0.86, 0.86, and 0.89 for the EBP degree, EBP frequency, EIP degree, and EIP frequency sections, respectively. Because different items loaded on different domains, internal consistency was assessed separately for the constructs of the Chinese and English versions, precluding direct comparison of Cronbach’s alpha values. Test–retest results two weeks apart were significantly correlated, with Spearman correlation coefficients ranging from 0.34 to 0.80. The complexity of some items, and the reversed presentation of certain items and response options noted earlier, may have influenced the retest outcomes.

The execution of EFA was attempted, but the outcomes were deemed unsatisfactory due to the complexity of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire instrument. The original questionnaire items were designed to assess five distinct domains related to EBP (knowledge, attitudes, understanding of EBP concepts, behavior, and self-perceived application and use of EBP) and four domains related to EIP (knowledge, attitudes, understanding of EIP concepts, and behavior). Both EBP and EIP encompass specific structural components and response options (degree/frequency) within their respective conceptual dimensions. The original developers conducted principal component analysis to simplify dimensions and validate constructs. However, previous studies did not include EFA. Our study’s EFA results confirmed a similar construct, but it is important to note that direct comparison between EFA and principal component analysis results was not feasible.33 During the EFA, it was observed that the component loadings in particular domains were consistent with the original authors’ discoveries. Notably, some factors/domains comprised only 1–2 items, such as a single item loading in “attitude towards EIP” within the EIP degree section. This concurrence could be attributed to the restricted number of items in these domains, causing us to hesitate in eliminating items with cross-loadings directly. As a result, reliability tests were not carried out on the version where items with cross-loadings were removed. Item 42 “Professional accountability is an essential part of a health professional’s roles and responsibilities” is an example where respondents’ understanding of EIP was unexpectedly intertwined with other factors. Similarly, certain items did not align with the expected factors. 
For instance, item 39, “Evidence-informed practice takes into account the complexities of my day-to-day work”, loaded under the “attitude” factor in the English version but was found to reflect participants’ understanding of EIP in the Chinese version. Item 33, “To effectively apply evidence into practice, I am not required to critically appraise relevant research papers”, originally loaded under “Self-perceived application and use” in the English version but was linked to respondents’ “understanding” of EBP among Chinese respondents. Confirmatory factor analysis was not feasible in our study because of the limited sample size, as it requires separate samples/data sets for the exploratory and confirmatory analyses.34 Future research with a larger sample should address this aspect of the findings.
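As background, the principal component analysis used by the original developers extracts components from the item correlation matrix; items with large loadings on the same component are grouped into a domain. A minimal sketch with NumPy (illustrative assumptions: correlation-matrix PCA, no rotation; not the authors’ exact procedure):

```python
import numpy as np

def pca_components(data, n_components):
    """PCA on the item correlation matrix.

    data: (n_respondents, n_items) array of responses.
    Returns (eigenvalues, loadings) with components sorted by
    explained variance; loadings = eigenvectors * sqrt(eigenvalues).
    """
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)   # returned in ascending order
    order = np.argsort(eigvals)[::-1]         # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
    return eigvals, loadings

# Two perfectly correlated items collapse onto a single component
data = np.array([[1, 2], [2, 4], [3, 6], [4, 8]], dtype=float)
eigvals, loadings = pca_components(data, 1)
```

Unlike PCA, EFA models only the shared variance among items, which is one reason the two methods’ loadings are not directly comparable.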

The current study has several limitations. First, although the original scale was validated among social work students, recruiting comparable participants was not possible because Beijing Sport University does not offer this specialization. Second, respondents may have had difficulty understanding key terms associated with EBP/EIP, owing to variations in coursework across educational levels and specialties. Third, the present findings on reliability and validity should be generalized cautiously to other health disciplines. As emphasized by the original developers, the validated questionnaire can be used across allied health disciplines and is effective in assessing students’ competence in applying evidence to clinical practice after completing courses on EBP and EIP. Future longitudinal research could apply this tool to other healthcare domains.

Conclusions

Following cultural adaptation, the psychometric assessment of the Chinese version of the Evidence-Based Practice and Evidence-Informed Practice Questionnaire demonstrated good reliability and validity. The tool is expected to be effective in evaluating the knowledge, attitudes, understanding, and behavior of physiotherapy and exercise rehabilitation students regarding EBP and EIP.

Data Sharing Statement

The data cannot be made public in order to protect the privacy of study respondents. De-identified data will be made available to qualified researchers upon reasonable request by emailing Jinghua Qian ([email protected]) or Dongsen Liu ([email protected]), School of Sports Medicine and Rehabilitation, Beijing Sport University, Beijing, China.

Consent for Publication

All authors consented to publication of the final version in this journal.

Acknowledgments

The authors express their gratitude to Guanlan Kang from the School of Psychology at Beijing Sport University for offering valuable suggestions during the validity study.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; and agree to be accountable for all aspects of the work.

Funding

This study is supported by the “Fundamental Research Funds for the Central Universities (the Laboratory of Exercises Rehabilitation Science), 2023KFZX005”.

Disclosure

The authors have no conflicts of interest regarding the authorship or publication of this article.

References

1. Scurlock-Evans L, Upton P, Upton D. Evidence-based practice in physiotherapy: a systematic review of barriers, enablers and interventions. Physiotherapy. 2014;100(3):208–219. doi:10.1016/j.physio.2014.03.001

2. Morley C, Jose K, Hall SE, et al. Evidence-informed, experience-based co-design: a novel framework integrating research evidence and lived experience in priority-setting and co-design of health services. BMJ Open. 2024;14(8):e084620. doi:10.1136/bmjopen-2024-084620

3. Nevo I, Slonim-Nevo V. The myth of evidence-based practice: towards evidence-informed practice. Br J Soc Work. 2011;41(6):1176–1197. doi:10.1093/bjsw/bcq149

4. Kumah EA, McSherry R, Bettany-Saltikov J, van Schaik P. Evidence-informed practice: simplifying and applying the concept for nursing students and academics. Br J Nurs. 2022;31(6):322–330. doi:10.12968/bjon.2022.31.6.322

5. Kumah EA, McSherry R, Bettany-Saltikov J, et al. PROTOCOL: evidence-informed practice versus evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior toward the application of evidence into practice: a comprehensive systematic review of undergraduate students. Campbell Syst Rev. 2019;15(1–2):e1015. doi:10.1002/cl2.1015

6. Mitchell R, Braithwaite J. Evidence-informed health care policy and practice: using record linkage to uncover new knowledge. J Health Serv Res Policy. 2021;26(1):62–67. doi:10.1177/1355819620919793

7. Kredo T, Young T, Wiysonge CS, McCaul M, Volmink J. The Cochrane Corner in the SAMJ: summaries of Cochrane systematic reviews for evidence-informed practice. S Afr Med J. 2015;105(7):548. doi:10.7196/SAMJnew.8035

8. Epstein I. Promoting harmony where there is commonly conflict: evidence-informed practice as an integrative strategy. Soc Work Health Care. 2009;48(3):216–231. doi:10.1080/00981380802589845

9. Snelgrove‐Clarke EE, Rush J. Maternal and women’s health: evidence‐informed practice on a global scale. Worldviews Evid Based Nurs. 2011;8(3):125–127. doi:10.1111/j.1741-6787.2011.00228.x

10. Engle RL, Mohr DC, Holmes SK, et al. Evidence-based practice and patient-centered care: doing both well. Health Care Manage Rev. 2021;46(3):174–184. doi:10.1097/HMR.0000000000000254

11. Yetis M, Ceylan I, Canli M, et al. Validity and reliability of Turkish version of the Munich Wrist Questionnaire in patients with wrist problems. Eval Health Prof. 2024;47(1):105–110. doi:10.1177/01632787231172276

12. Tractenberg RE, Gordon M. Supporting evidence-informed teaching in biomedical and health professions education through knowledge translation: an interdisciplinary literature review. Teach Learn Med. 2017;29(3):268–279. doi:10.1080/10401334.2017.1287572

13. Ooi SL, Smith L, Pak SC. Evidence-informed massage therapy - an Australian practitioner perspective. Complement Ther Clin Pract. 2018;31:325–331. doi:10.1016/j.ctcp.2018.04.004

14. Albert D, Fortin R, Herrera C, et al. Strengthening chronic disease prevention programming: the Toward Evidence-Informed Practice (TEIP) program evidence tool. Prev Chronic Dis. 2013;10:E87. doi:10.5888/pcd10.120107

15. Dean E. Physical therapy in the 21st century (part I): toward practice informed by epidemiology and the crisis of lifestyle conditions. Physiother Theory Pract. 2009;25(5–6):330–353. doi:10.1080/09593980802668027

16. Classen S, Alvarez L. Editorial: evidence-informed reviews-moving occupational therapy practice and science forward. OTJR. 2015;35(4):199–203. doi:10.1177/1539449215607431

17. Romney W, Salbach NM, Perry SB, Deutsch JE. Evidence-based practice confidence and behavior throughout the curriculum of four physical therapy education programs: a longitudinal study. BMC Med Educ. 2023;23(1):839. doi:10.1186/s12909-023-04821-0

18. Jessani NS, Hendricks L, Nicol L, Young T. University curricula in evidence-informed decision making and knowledge translation: integrating best practice, innovation, and experience for effective teaching and learning. Front Public Health. 2019;7. doi:10.3389/fpubh.2019.00313

19. Kumah EA, Bettany-Saltikov J, van Schaik P, McSherry R, Boadu P. Development and validation of a questionnaire to assess evidence-based practice and evidence-informed practice knowledge, attitudes, understanding and behavior. Teach Learn Nurs. 2023;18(4):e220–e228. doi:10.1016/j.teln.2023.07.006

20. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine. 2000;25(24):3186–3191. doi:10.1097/00007632-200012150-00014

21. Sousa VD, Rojjanasrirat W. Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. J Eval Clin Pract. 2011;17(2):268–274. doi:10.1111/j.1365-2753.2010.01434.x

22. Gorsuch RL. Factor Analysis. 2nd ed. Lawrence Erlbaum Associates; 1983.

23. Terwee CB, Bot SD, de Boer MR, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42. doi:10.1016/j.jclinepi.2006.03.012

24. Abdollahpour I, Nedjat S, Noroozian M, Majdzadeh R. Performing content validation process in development of questionnaires. Iran J Epidemiol. 2011;6:66–74.

25. Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar AR. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015;4(2):165–178. doi:10.15171/jcs.2015.017

26. Shi J, Mo X, Sun Z. Content validity index in scale development. Zhong Nan Da Xue Xue Bao Yi Xue Ban. 2012;37(2):152–155. doi:10.3969/j.issn.1672-7347.2012.02.007

27. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174. doi:10.2307/2529310

28. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment. 1994;6(4):284–290. doi:10.1037/1040-3590.6.4.284

29. Xie S, Wu J, Chen P, et al. The generic version of China Health Related Outcomes Measures (CHROME-G): psychometric testing and comparative performance with the EQ-5D-5L and SF-6Dv2 among the Chinese general population. BMC Public Health. 2024;24(1):3485. doi:10.1186/s12889-024-20999-4

30. Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39(1):31–36. doi:10.1007/BF02291575

31. Jia Y, Zhuang X, Zhang Y, et al. Adaptation and validation of the Evidence-based Practice Profile Questionnaire (EBP2Q) for clinical postgraduates in a Chinese context. BMC Med Educ. 2023;23(1):588. doi:10.1186/s12909-023-04594-6

32. Fernández-Domínguez JC, de Pedro-Gómez JE, Morales-Asencio JM, Bennasar-Veny M, Sastre-Fullana P, Sesé-Abad A. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: creation, development and psychometric validation. PLoS One. 2017;12(5):e0177172. doi:10.1371/journal.pone.0177172

33. Alavi M, Visentin DC, Thapa DK, Hunt GE, Watson R, Cleary M. Exploratory factor analysis and principal component analysis in clinical studies: which one should you use? J Adv Nurs. 2020;76(8):1886–1889. doi:10.1111/jan.14377

34. Santor DA, Haggerty JL, Lévesque JF, et al. An overview of confirmatory factor analysis and item response analysis applied to instruments to evaluate primary healthcare. Healthc Policy. 2011;7(Spec Issue):79–92.
