Development and validation of a tool to assess researchers’ knowledge of human subjects’ rights and their attitudes toward research ethics education in Saudi Arabia

Abstract

Background

Researchers must adhere to ethical and scientific standards in their research involving human subjects; therefore, their knowledge of human subjects’ rights is essential. A tool to measure the extent of this knowledge is necessary to ensure that studies with participants are conducted ethically and to enhance research integrity. Currently, no validated instrument is available for such an assessment. Therefore, the primary purpose of this study is to develop a reliable and valid instrument to assess researchers’ knowledge of human subjects’ rights in clinical settings, as well as a reliable and valid measure of their attitudes toward clinical research ethics education in Saudi Arabia.

Methods

The current study involves the development of a questionnaire about the rights of human subjects in research and the researchers’ attitudes toward research ethics education. The content was developed based on an extensive review of research ethics guidelines. A panel of experts tested the questionnaire for face validity (n = 5) and content validity (n = 8). The reliability of the questionnaire was established by a split-half reliability coefficient and item analysis among a sample (n = 301) of clinical researchers.

Results

Face validity testing demonstrated that the questionnaire was quick to complete and easy to answer. The global content validity indices (S-CVIs) were greater than 0.78 for all questionnaire sections; the split-half reliability coefficient was 0.755 for the knowledge items; Cronbach’s alpha was 0.77 for researchers’ attitudes, showing good internal consistency. The difficulty index ranged from 12.0% to 98.7% across the knowledge items. Most questions met acceptable reliability and discrimination criteria. The final version of the questionnaire contained 89 items: 15 questions on demographic and professional characteristics, 64 items on knowledge, and 10 items on attitudes.

Conclusions

The questionnaire is a valid and reliable tool to assess biomedical researchers’ knowledge of human subjects’ rights and their attitudes toward research ethics education. This instrument could help address gaps in researchers’ knowledge of these rights and facilitate the development of educational intervention programs with appropriate learning objectives.

Background

Clinical research is the gold standard for developing new treatment regimens [1]. Researchers must adhere to ethical and scientific standards when they undertake research involving human subjects [2]. One key objective behind applying these standards is to ensure that any conclusion on treatment effectiveness, mechanisms of disease, normal physiology, and learning and behaviors can be verified [3]. Prior to the establishment of such standards, unjustified studies that placed human subjects in harm’s way were carried out without consideration of their rights. Examples include the experiments conducted in Nazi Germany during World War II, which were acts of human cruelty, and the Tuskegee Syphilis Study conducted in the United States of America [4].

The principles that form the guidelines for ethical research and Good Clinical Practice (GCP) are autonomy, beneficence, non-maleficence, and justice [5]. The well-known Nuremberg Code, issued in 1947, marked the beginning of national ethical codes governing medical research [6]; it was followed by the Declaration of Helsinki (1964), which focused on protecting human subjects who participate in medical research [7]. Further, the International Conference on Harmonization Good Clinical Practice (ICH-GCP) guideline (1998) makes Institutional Review Boards (IRBs) or Ethics Committees (ECs) responsible for reviewing research protocols involving human subjects and for ensuring the adequacy of participant protection [8].

The Council of Ministers in the Kingdom of Saudi Arabia (KSA) passed a law entitled The Law of Ethics of Research on Living Creatures (the Law) on August 24th, 2010 [9]. Following the passing of the Law, the National Committee of Bioethics (NCBE) issued the Implementing Regulations of the Law of Ethics of Research on Living Creatures (the Implementing Regulations) on December 25th, 2011 [9]. The Saudi system considers Islamic Sharia (Law) in addition to international research ethics guidelines [10]; the Law therefore serves Islamic countries in the region with values and social structures similar to those of KSA [11]. Moreover, the Saudi Food and Drug Authority requires clinical trials to follow the ethical principles stated in the ICH-GCP and the World Medical Association Declaration of Helsinki [12].

Researchers’ knowledge of subjects’ rights is essential for achieving the highest ethical standards [13]. However, despite stringent regulations, studies suggest that informed consent requirements and IRB review may not by themselves assure adequate protection for participants [8]. Additionally, studies conducted internationally have indicated insufficient knowledge of research ethics among faculty members and physicians [14, 15]. Meanwhile, a multicenter study conducted in KSA and Egypt reported sub-optimal knowledge of and attitudes toward research ethics among university dental faculty and recommended further studies to examine the generalizability of its results to other institutions [16].

This indicates a critical need to examine researchers’ knowledge with validated instruments, to ensure that they are aware of ethical considerations and uphold human subjects’ rights throughout any study. Thus, the current study aims to develop a reliable and valid questionnaire on researchers’ knowledge of human subjects’ rights in clinical settings and a reliable and valid measure of their attitudes toward education on research ethics. The current study utilized a sample of medical researchers employed at King Fahad Medical City (KFMC) in KSA. Furthermore, the data resulting from this study may help policymakers develop plans for the effective implementation of ethics committee functions, establish educational intervention programs in clinical research ethics, and subsequently address potential knowledge gaps in these areas.

Methods

Study design and setting

The current study involves the development and validation of the questionnaire, followed by a cross-sectional study conducted at KFMC, one of the main health research centers in Riyadh, Saudi Arabia.

Study subjects

Participants were KFMC physicians, nurses, pharmacists, technicians, allied health professionals, and medical researchers (principal investigators, co-investigators, and research coordinators) who had conducted at least one research study, were listed in the KFMC IRB database from 2007 to 2021, and were willing to participate in the study. Furthermore, health care workers who had participated in any of the conducted research were polled to gain insight into its ethical conduct.

Data collection and management

An invitation letter was sent by email to the 550 active researchers in the IRB records at KFMC who met the criteria to participate in the study. Attached to the invitation letter was a Google Form link to the questionnaire, which contained a cover letter describing the study and requiring the researchers’ consent for voluntary participation. Once the target sample of 301 participants had signed up through the Google Form link, no further responses were accepted. The questionnaire was in English, the main working language for the staff (researchers) at KFMC. Only the research team had access to the data, and anonymity and confidentiality were maintained at all times.

Questionnaire development

Five important steps were taken in developing the questionnaire used to assess researchers’ knowledge of the rights of human subjects in research. We first searched the literature extensively for an existing questionnaire. Figure 1 presents a flow chart outlining the development and validation of the questionnaire.

Fig. 1 Flow chart outlining the development and validation of the questionnaire

The questionnaire was developed from a previous study [16] and drew on issues addressed in the ethical guidelines of the ICH-GCP and the Code of Federal Regulations [17, 18]. Five ethics experts, including the principal investigator, belonging to different nationalities (USA, Switzerland, Germany, and Saudi Arabia), were involved in developing the first draft of the questionnaire.

The questionnaire was divided into three sections. The first section covered respondents’ demographic information and professional characteristics, including their age, gender, nationality, education level, whether they were medical education graduates, occupation, years of research experience, and number of research publications in medical journals. The second section explored respondents’ knowledge of subjects’ rights in clinical research. The third section explored respondents’ attitudes toward education on research ethics. Questions in the first section required a single response, multiple responses, or a “yes,” “no,” or “not sure” answer; answers in the second section were assessed with “correct,” “not correct,” or “I don’t know” choices. Finally, answers in the third section were assessed on a 5-point Likert scale (strongly agree, agree, neutral, disagree, and strongly disagree).

Face validity

Face validity is used to assess the readability and feasibility of the questionnaire, the consistency of its style and formatting, and the clarity of its language [19, 20]. Eight researchers reviewed the first draft of the questionnaire (see Additional file 1).

To determine the face validity of the questionnaire, we created an evaluation form that helped the reviewers assess the various items. It also helped us identify areas for improvement and potential additions to the next version. The questions were evaluated for clarity, style, ease of understanding, and layout.

Content validity

The questionnaire, after face validation, was sent to a panel of eight content experts in the field of research ethics to review the instrument for content validity. These experts included professors and PhD holders in the field of bioethics with vast experience in human research, and they belonged to different nationalities (USA, Switzerland, Germany, and Saudi Arabia). They reviewed the questionnaire for readability, clarity, and comprehensiveness and reached a level of agreement on which questions should be retained in the final questionnaire.

For content validity, the panel rated the relevance of each question on a 4-point Likert scale: 1 = not relevant, 2 = somewhat relevant, 3 = relevant, and 4 = very relevant. For each question, the number of experts giving a score of 3 or 4 was counted (3, 4 = relevant; 1, 2 = non-relevant). The content validity index (CVI) was calculated by averaging the cumulative level of agreement among the experts [21, 22]. We used the item-level content validity index (I-CVI) for individual questions and the scale-level content validity index (S-CVI) for the overall scale; the S-CVI was calculated from the I-CVIs [20, 21, 23]. Previous research suggested that an I-CVI of ≥ 0.78 obtained from 6 to 10 raters would be acceptable for inclusion in the questionnaire [23].
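
As a worked illustration, the following is a minimal Python sketch of how the I-CVI and the common S-CVI variants can be computed from a matrix of expert relevance ratings. The ratings shown are hypothetical; this study’s actual panel ratings appear in Tables 2, 3, 4, 5, 6, 7 and 8. Note that with eight raters, an I-CVI of 0.875 corresponds to seven of eight experts rating an item 3 or 4.

    import numpy as np

    # Hypothetical relevance ratings: 5 items (rows) x 8 experts (columns), each 1-4.
    ratings = np.array([
        [4, 4, 3, 4, 4, 3, 4, 4],
        [3, 4, 4, 4, 3, 4, 4, 4],
        [4, 4, 4, 4, 4, 4, 4, 4],
        [2, 4, 3, 4, 4, 3, 4, 4],
        [4, 3, 4, 4, 4, 4, 3, 4],
    ])

    relevant = ratings >= 3                 # a rating of 3 or 4 counts as "relevant"
    i_cvi = relevant.mean(axis=1)           # I-CVI: share of experts rating each item relevant
    s_cvi_ave = i_cvi.mean()                # S-CVI/Ave: mean I-CVI across items
    s_cvi_ua = relevant.all(axis=1).mean()  # S-CVI/UA: share of items rated relevant by all experts

    keep = i_cvi >= 0.78                    # retention rule cited in the text [23]
    print(i_cvi, s_cvi_ave, s_cvi_ua, keep)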

Pilot study and questionnaire revision

An initial form was pilot tested on a convenience sample of 30 consenting doctors, pharmacists, nurses, and other researchers at KFMC. The investigator asked each respondent to record the time needed to complete the questionnaire (approximately 20–25 min) and to provide feedback on any misleading or confusing question items.

Further, the investigator wanted to ensure that the questions were easy for all respondents to understand. The participants’ suggestions included reformulating and rewording some items and removing potentially repetitive ones. Furthermore, five experts reviewed the pilot study results: two from the Clinical Trials Center at the University of Zurich and three with expertise in research ethics. All reported feedback was sent to the principal investigator for necessary actions, and questions were removed or modified based on the suggestions.

Questionnaire reliability

Because the investigator could not repeat the test, the current study evaluated the internal consistency of the knowledge items in the questionnaire using the split-half coefficient. When re-testing is not possible, the split-half coefficient is an alternative technique for assessing questionnaire reliability; it divides the questions into two halves (e.g., odd versus even). The values of the split-half coefficient range from 0.0 to 1.0, with higher values indicating higher reliability [24]. Meanwhile, Cronbach’s alpha was reported to assess the reliability of the section on researchers’ attitudes toward education about research ethics, because it is measured on a 5-point Likert scale [25].
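
To make the two statistics concrete, here is a minimal Python sketch on hypothetical data (the study itself used Stata). The correlation between the two halves is conventionally stepped up with the Spearman–Brown formula; treating that as the procedure used here is an assumption, since the text does not say.

    import numpy as np

    rng = np.random.default_rng(0)
    knowledge = rng.integers(0, 2, size=(301, 65))  # hypothetical 0/1 knowledge scores
    attitudes = rng.integers(1, 6, size=(301, 10))  # hypothetical 5-point Likert responses

    # Split-half reliability: correlate odd-item and even-item half scores,
    # then apply the Spearman-Brown correction for full test length.
    odd = knowledge[:, 0::2].sum(axis=1)
    even = knowledge[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    split_half = 2 * r / (1 + r)

    # Cronbach's alpha for a k-item Likert scale.
    def cronbach_alpha(items):
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    print(split_half, cronbach_alpha(attitudes))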

To do so, we selected a random sample from the population of 550 active researchers registered in the IRB records at KFMC. To establish a power of 80% at the 99% confidence level with a 5% margin of error, the required sample size was 301 participants; we therefore conducted the reliability analysis on a sample of 301 researchers drawn randomly from the IRB records at KFMC. The questionnaire was distributed to participants using a Google Form link, along with a cover letter explaining the purpose of the study and the confidentiality of their personal data.
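
The reported sample size is consistent with a proportion-based margin-of-error calculation at 99% confidence (z ≈ 2.576, p = 0.5, e = 0.05) with a finite population correction for N = 550. The exact formula the authors used is not stated, so the sketch below is an assumption that happens to reproduce 301:

    import math

    z, p, e, N = 2.576, 0.5, 0.05, 550  # 99% confidence, maximum variability, 5% margin, population
    n0 = z**2 * p * (1 - p) / e**2      # unadjusted sample size, ~664
    n = n0 / (1 + (n0 - 1) / N)         # finite population correction
    print(math.ceil(n))                 # -> 301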

Item analysis

Item analysis is used to assess the effectiveness of each question in a test. It can be performed using item-difficulty and item-discrimination indices [26, 27]. The difficulty index measures the percentage of participants who correctly answered each item. Its values range from 0.0 to 1.0 (0.0 to 100%): lower values indicate more difficult questions and higher values indicate easier ones. The ideal item-difficulty index is 85% for dichotomous questions (correct/incorrect) and 77% for three-option multiple-choice questions [26]. Questions with an item difficulty ≤ 30.0% were considered difficult; between 30 and 80%, of medium or moderate difficulty; and ≥ 80%, easy [26].
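
A short sketch of the difficulty index and the banding above, assuming a 0/1 score matrix with respondents as rows and items as columns. For example, an item answered correctly by 290 of 301 respondents has an index of 96.3% and is classed as easy.

    import numpy as np

    def difficulty_index(scores):
        """Percentage of respondents who answered each item correctly (0-100)."""
        return scores.mean(axis=0) * 100

    def difficulty_band(p):
        """Bands used in this study: <=30% difficult, 30-80% moderate, >=80% easy."""
        if p <= 30.0:
            return "difficult"
        if p < 80.0:
            return "moderate"
        return "easy"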

Meanwhile, the item-discrimination index for each item distinguishes between participants who do well and those who do poorly on a test [27]. It can be measured by dividing the sample into high- and low-skill groups based on the total test score; the discrimination index is then calculated by subtracting the percentage of correct answers to the item in the low-skill group (i.e., bottom 25%) from the percentage of correct answers in the high-skill group (i.e., top 25%). Its values range from -1.0 to 1.0: higher positive values indicate that the question discriminates well between the two groups, while lower values indicate poorer discrimination. Negative values near -1.0 indicate that participants in the low-skill group answered the question correctly more often than those in the high-skill group [28]. Another way to calculate item discrimination is the point-biserial correlation, which measures the correlation between each question and the total test score (i.e., item-to-total correlations). Table 1 shows the guidelines for determining whether a question should be rejected or improved [27]; a question with a point-biserial correlation < 0.20 is to be removed or revised [28]. The data collected during the reliability analysis were also used in the item analysis.
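
Both discrimination measures can be sketched as follows, under the same hypothetical 0/1 score-matrix assumption; the 25% group fraction follows the text, and items with a point-biserial correlation below 0.20 would be flagged for removal or revision.

    import numpy as np

    def discrimination_index(scores, frac=0.25):
        """D = proportion correct in top group minus proportion correct in bottom group."""
        order = np.argsort(scores.sum(axis=1))   # rank respondents by total score
        k = int(len(order) * frac)
        low, high = scores[order[:k]], scores[order[-k:]]
        return high.mean(axis=0) - low.mean(axis=0)

    def point_biserial(scores):
        """Correlation of each (dichotomous) item with the total test score."""
        total = scores.sum(axis=1)
        return np.array([np.corrcoef(scores[:, j], total)[0, 1]
                         for j in range(scores.shape[1])])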

Table 1 Discrimination guidelines to determine whether the question should be rejected or improved

The final version of the questionnaire

Based on the validity and reliability testing in the steps specified above, we produced the final questionnaire.

Ethical considerations

This study was conducted following the ethical requirements of KFMC. Ethical approval was obtained from the IRB at KFMC (IRB log No. 19–240). All methods were performed in accordance with the relevant guidelines and regulations and the Declaration of Helsinki. The potential participants’ contact details were obtained from the IRB at KFMC subsequent to ethical approval. The participants were informed that privacy and anonymity would be maintained.

Statistical analysis

Content validity was assessed by calculating the I-CVI and S-CVI for each part of the questionnaire. The split-half reliability coefficient was calculated to assess the internal consistency reliability of the items used to assess researchers’ knowledge. Cronbach’s alpha was calculated to assess the internal consistency of the items used to assess researchers’ attitudes toward research ethics education. Item analysis was conducted using the difficulty index and the discrimination index (discrimination and point-biserial correlations). Descriptive statistics of the demographic and research background characteristics were presented as mean (standard deviation; SD), median (interquartile range; IQR), or counts and percentages, as appropriate. We also used independent samples t-tests and one-way ANOVA with Tukey–Kramer post hoc analysis to compare average total knowledge scores by demographics and between trained and untrained participants. Statistical significance was set at p-values < 0.05. All analyses were done using a standard software package (Stata, version 15.0; StataCorp).
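
For illustration, an equivalent sketch of these group comparisons in Python on hypothetical data (the study’s analyses were run in Stata 15; scipy and statsmodels supply the t-test, one-way ANOVA, and Tukey–Kramer procedures):

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    score = rng.normal(52, 4, size=301)                  # hypothetical total knowledge scores
    trained = rng.integers(0, 2, size=301).astype(bool)  # hypothetical prior-training flag
    occupation = rng.choice(["consultant", "resident", "nurse"], size=301)

    # Trained vs. untrained: independent-samples t-test.
    t, p = stats.ttest_ind(score[trained], score[~trained])

    # Occupation: one-way ANOVA, then Tukey post hoc pairwise comparisons
    # (pairwise_tukeyhsd handles unequal group sizes, i.e., the Kramer adjustment).
    f, p_anova = stats.f_oneway(*(score[occupation == g] for g in np.unique(occupation)))
    print(t, p, f, p_anova)
    print(pairwise_tukeyhsd(score, occupation, alpha=0.05))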

Results

Questionnaire design

The first draft of the questionnaire about researchers’ knowledge of human subjects’ rights and attitudes toward research ethics education consisted of three parts and a total of 90 questions (see Additional file 1).

  • Part I- The demographic and professional characteristics, which contained 15 questions.

  • Part II- The knowledge section, which contained six sections and a total of 65 questions, as detailed below:

    • Section 1: The basic and additional elements of informed consent: 17 Questions

    • Section 2: Institutional Review Board (IRB) or Research Ethics Committee (REC): 10 Questions

    • Section 3: Safety Reporting Issues in Clinical Research: 6 Questions

    • Section 4: Researchers’ responsibilities in clinical research: 19 Questions

    • Section 5: Technical Aspects of the Informed Consent Process: 10 Questions

    • Section 6: Clinical research scenario on confidentiality: 3 Questions

  • Part III- The researchers’ attitude toward research ethics education, which contained 10 questions.

Face validity

All respondents reviewed each of the 65 questions on researchers’ knowledge and the 10 questions on researchers’ attitudes. The respondents indicated that they understood the questions. Eight respondents found the questions easy to answer, and four of these eight suggested that the appearance and layout would be acceptable to the target audience. Most respondents suggested splitting question 11 in Part II, Section 1 into two questions to ensure clarity, and redrafting question 10 in Part II, Section 2. Most reviewers suggested removing question 18 and redrafting question 19 in Part II, Section 4. They suggested removing question 7 in Part II, Section 5, because of its ambiguity, and adding a new question to ensure clarity. They also suggested removing question 2 and redrafting question 3 in Part II, Section 6. At the end of the face validity stage, the questionnaire had a total of 74 questions (64 on knowledge and 10 on researchers’ attitudes toward research ethics education). After this, the questionnaire was sent for content validation (see Additional file 2).

Content validity

The panel of experts reviewed 74 questions (64 questions on the knowledge of human subjects’ rights and 10 questions on researchers’ attitudes toward research ethics education). For questions with a factual statement in the knowledge section, the panel suggested changing the answer options from “yes,” “no,” and “I don’t know” to “correct,” “not correct,” and “I don’t know.”

The content validity process resulted in the addition, revision, and redrafting of some questions according to the panel’s opinions. At the end of content validity, the questionnaire had a total of 76 questions, consisting of 66 questions on knowledge and 10 questions on researchers’ attitudes toward research ethics education (see Additional file 3). Tables 2, 3, 4, 5, 6 and 7 show the content validity indices for the six sections in Part II of the questionnaire about knowledge (Sections 1–6) as 0.99, 0.99, 1.00, 0.97, 0.95, and 1.00, respectively. For attitudes (Part III of the questionnaire), the index is 0.95 (Table 8).

Table 2 Ratings on an 18-item scale by eight experts (Part II, Section 1)
Table 3 Ratings on a 10-item scale by eight experts (Part II, Section 2)
Table 4 Ratings on a 6-item scale by eight experts (Part II, Section 3)
Table 5 Ratings on an 18-item scale by eight experts (Part II, Section 4)
Table 6 Ratings on a 10-item scale by eight experts (Part II, Section 5)
Table 7 Ratings on a 2-item scale by eight experts (Part II, Section 6)
Table 8 Ratings on a 10-item scale by eight experts (Part III)

Pilot study and questionnaire revision

A total of 30 doctors, nurses, pharmacists, and researchers participated in the pilot study; they were not part of the 301 researchers who participated in the questionnaire reliability assessment. The questionnaire, consisting of 76 questions (66 on knowledge and 10 on attitudes toward research ethics education), was distributed to the participants. Though the participants understood the questions, they proposed rewording some questions and deleting others. Finally, the experts suggested adding some questions to the knowledge part related to the IRB and safety reporting issues in clinical research.

Most respondents suggested redrafting question 9 in Part II, Section 1 to ensure clarity; deleting question 4 in Part II, Section 2; and deleting questions 8 and 11, which were negatively worded and caused confusion, while adding two new questions to ensure clarity. The respondents suggested modifying question 4 in Part II, Section 3 to ensure clarity and adding a new question. The majority of reviewers suggested that question 18 in Part II, Section 4 be removed and question 10 be redrafted. They proposed modifying and redrafting questions 3, 5, and 9 in Part II, Section 5. At the end of the pilot study, the questionnaire consisted of 75 questions: 65 on researchers’ knowledge and 10 on their attitudes toward research ethics education. These questions were used in the item analysis (see Additional file 4).

Questionnaire reliability

A total of 301 participants completed the questionnaire: 78 (25.9%) consultants, 62 (20.6%) assistant consultants, 21 (7.0%) fellows, 18 (6.0%) residents, 31 (10.3%) pharmacists, 5 (1.7%) faculty members, 51 (16.9%) nurses, and 35 (11.6%) from other occupations. Moreover, 153 (50.8%) were males; 103 (34.2%) were aged between 31 and 40 years; 188 (62.5%) were Saudis; 144 (47.8%) had a master’s degree; and 89 (29.6%) had graduated in Saudi Arabia. Among the 301 participants, 235 (78.1%) had prior training in the ethics of protecting research subjects’ rights; 239 (79.4%) had participated in research as principal investigators; 283 (94.0%) had participated as co-investigators; and 210 (69.8%) had knowledge of ethical guidelines. The median years of experience was 10 (IQR: 6–16), and the median number of publications was 14 (IQR: 6–27). The demographic information and research background of the participants are shown in Table 9.

Table 9 Descriptive statistics of the study participants’ demographic and professional characteristics

The split-half reliability coefficient for the 65 knowledge questions was 0.755, demonstrating good internal consistency and reliability of the questionnaire, with an average total score of 52.4 ± 9.1 out of a maximum of 65 points (80.6%). Cronbach’s alpha for researchers’ attitudes toward research ethics education was 0.77 (95% CI: 0.73 to 0.81), demonstrating good reliability, with an average total score of 42.5 ± 4.4 out of a maximum of 50 points (85.0%), indicating a good perception of research ethics education. The results in Table 10 indicate that most participants held positive opinions on education about research subjects’ rights. Notably, the participants preferred face-to-face teaching methods over distance learning. Moreover, all items contributed to acceptable internal consistency. Deleting item 8.5 would have raised the internal consistency reliability of the remaining 9 items to 0.848, but we retained this item to discriminate between face-to-face and distance learning and to understand the extent to which the participants supported distance learning.

Table 10 Item-deleted mean and Cronbach’s alpha if item deleted for the attitudes section
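
The alpha-if-item-deleted values in Table 10 follow from recomputing Cronbach’s alpha with each item left out in turn; a minimal sketch (hypothetical response matrix assumed, reusing the alpha formula sketched in the Methods):

    import numpy as np

    def cronbach_alpha(items):
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    def alpha_if_item_deleted(items):
        """Alpha of the scale with each of the k items removed (one value per item)."""
        return np.array([cronbach_alpha(np.delete(items, j, axis=1))
                         for j in range(items.shape[1])])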

Therefore, all 65 knowledge questions, like the 10 items on researchers’ attitudes toward education on research ethics, met acceptable reliability criteria. Thus, the pre-final questionnaire consisted of 75 items: 65 knowledge items and 10 attitude items.

Item analysis

Table 11 shows the item-difficulty and item-discrimination analysis of the knowledge items in the questionnaire. The results of the item analysis showed that most questions were at appropriate levels of difficulty and discrimination as per the guidelines in Table 1. However, five questions had unacceptable item difficulty and item discrimination (i.e., Q 2.2, Q 2.10, Q 2.15, Q 5.11, and Q 6.3). Questions 2.2 and 2.10 were retained because they measured the basic aspects of informed consent and received I-CVIs of 0.88 and 1.00, respectively, in the content validity analysis. Question 2.15 was retained because it covers one of the additional elements of informed consent taken from the Code of Federal Regulations (CFR) [18]. Question 5.11 was retained because it measured knowledge of researchers’ responsibilities in clinical research and performed well in content validity, with an I-CVI of 1.00. Retaining these questions (Q 2.2, Q 2.10, and Q 5.11) also helped ensure that respondents did not skim-read and answer at random (e.g., superficial answers) and reduced acquiescence and extreme response biases. Question 6.3 was discarded from the questionnaire because it also had a low I-CVI (i.e., 0.75). Therefore, at the end of item analysis, the final version of the questionnaire consisted of 74 question items: 64 on researchers’ knowledge of human subjects’ rights and 10 on researchers’ attitudes toward research ethics education (see Additional file 5).

Table 11 Item difficulty and discrimination

The current study also compared test performance across demographic and research backgrounds. The findings showed statistically significant differences in the average total knowledge score by occupation (p-value = 0.017). Post hoc analysis indicated that consultants (53.1 ± 3.1, p-value < 0.001), assistant consultants (52.7 ± 4.1, p-value = 0.001), fellows (52.0 ± 3.2, p-value = 0.024), faculty members (55.2 ± 3.6, p-value = 0.004), pharmacists (52.3 ± 6.1, p-value = 0.008), nurses (52.4 ± 3.8, p-value = 0.003), and other occupations (51.7 ± 5.1, p-value = 0.025) scored significantly higher than residents (48.8 ± 6.9), as shown in Fig. 2, while other pairwise comparisons showed no statistically significant differences, indicating similar knowledge levels (p-values > 0.05). Furthermore, participants with prior training on human subjects’ rights had significantly higher average knowledge scores (52.6 ± 3.9) than those without prior training (51.3 ± 5.7, p-value = 0.040), as shown in Fig. 3. Significant positive correlations were observed between the total knowledge score and research experience (r = 0.163, p-value = 0.039) and the number of published articles (r = 0.203, p-value < 0.001). No statistically significant differences in average knowledge scores were found for other demographic characteristics.

Fig. 2 Box plot of total knowledge score by occupation

Fig. 3 Box plot of the total knowledge score by prior training

Discussion

Based on a rigorous methodology for developing a valid and reliable questionnaire, especially in terms of face and content validity, the current study demonstrated the validity of the instrument and its ability to achieve its measurement objectives [29, 30]. I-CVIs and the S-CVI/UA (universal agreement) were used to determine content validity. All parts of the questionnaire scored higher than the benchmark found in the literature, indicating high content validity [23, 31].

Split-half reliability analysis of the question items on researchers’ knowledge of human subjects’ rights, performed as per scientific literature guidelines, demonstrated good internal consistency reliability [32, 33]. This method is considered the best alternative to test–retest reliability when retesting is not feasible [33, 34]. As for researchers’ attitudes toward research ethics education, Cronbach’s alpha showed good internal consistency and reliability for this scale [35, 36].

The item analysis of the knowledge items showed an appropriate level of difficulty for most questions [27, 28]. Furthermore, most questions had appropriate levels of discrimination, whether according to the guidelines proposed by Oosterhof [27] or a point-biserial correlation greater than 0.20 [28]. However, five out of 65 questions had unacceptable item difficulty and item discrimination (i.e., Q 2.2, Q 2.10, Q 2.15, Q 5.11, and Q 6.3). Three questions (2.2, 2.10, and 5.11) were retained to avoid random answers and reduce acquiescent and extreme response biases. Questions 2.2 and 2.10 measured the basic elements of informed consent [37]; Question 2.15 covered one of the additional elements of informed consent as per the CFR [18]; and Question 5.11 measured knowledge of researchers’ responsibilities in handling adverse events [38]. Question 6.3 was removed because it had both a low I-CVI and a low discrimination index.

The current study also looked at intergroup differences in overall test knowledge and found statistically significant effects of participants’ occupations and prior training in research ethics. More specifically, consultants, assistant consultants, fellows, faculty members, pharmacists, nurses, and others (e.g., research center members) performed better than residents. This indicates that residents are more likely than others to have a knowledge gap in research ethics. Moreover, participants who had received prior training in research ethics performed better on the knowledge test than those who had not. Cummings et al. similarly indicated that participants who had previously received ethics training outperformed those who had not [39]. Therefore, the current instrument could be used to address knowledge gaps in research ethics. Notably, the overall performance across all question items revealed a knowledge gap in various aspects of clinical research ethics, which confirms previous findings [15, 24, 39,40,41]. Furthermore, the current results showed that most participants supported the need for education in clinical research ethics and preferred face-to-face learning modules to distance learning courses. Further education in clinical research ethics would improve investigators’ performance in ethics knowledge tests [42,43,44,45].

The questionnaire developed in this study is useful for assessing knowledge gaps among biomedical researchers on different aspects of clinical research ethics and their attitudes toward education on research ethics. Furthermore, it can be used to design educational intervention programs and test their acceptability, by prioritizing topics that address knowledge deficits in clinical research ethics. If the main goal of clinical research ethics education is to strengthen investigators’ knowledge of, and ethical actions regarding, human subjects’ rights, these intervention programs must teach the most important skills and knowledge of clinical research ethics and test how well investigators know and apply them [45].

Finally, though this study was successful in developing and validating a tool to assess researchers’ knowledge of human subjects’ rights and their attitudes toward research ethics education in the biomedical field, using rigorous methods and a reasonable sample size, it had some potential limitations. First, we could not conduct a test–retest reliability analysis, even though split-half reliability showed good internal consistency of the knowledge items. Second, this study was conducted at KFMC and might not be representative of, or generalizable to, other biomedical researchers at the local or international level. Third, this study could not assess the tool for criterion validity owing to the lack of available validated tools and gold standards in the cited literature. Finally, this instrument assessed only knowledge of clinical research ethics, not behavior, skills, or practice.

Conclusion

This study has successfully developed a valid and reliable tool to assess researchers’ knowledge of human subjects’ rights and attitudes toward education on research ethics. The final version of the questionnaire included 64 knowledge items covering 6 main domains of research ethics and 10 items on attitudes toward research ethics education. This instrument could be useful in addressing gaps in the knowledge of human subjects’ rights and facilitating the development of educational intervention programs with appropriate learning objectives. We recommend that other researchers continue using this novel tool and refining it into an even more robust questionnaire for measuring the same constructs. Finally, the methodology applied in this study can be utilized in developing similar assessment tools in clinical research ethics, and the tool could be used in European countries with minor changes.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CITI: Collaborative Institutional Training Initiative
CVI: Content Validity Index
ECs: Ethics Committees
GCP: Good Clinical Practice
I-CVI: Item-level Content Validity Index
ICH-GCP: International Conference on Harmonization Good Clinical Practice
IQR: Interquartile Range
IRBs: Institutional Review Boards
KFMC: King Fahad Medical City
KSA: Kingdom of Saudi Arabia
NCBE: National Committee of Bioethics
NIDA: National Institute on Drug Abuse
NIH: National Institutes of Health
REC: Research Ethics Committee
S-CVI: Scale-level Content Validity Index
S-CVI/UA: Scale-level Content Validity Index, Universal Agreement calculation method
SD: Standard Deviation

References

  1. Rosemann A. Alter-standardizing clinical trials: the gold standard in the crossfire. Sci Cult (Lond). 2019;28(2):125–48.

  2. Hasan N, Rana RU, Chowdhury S, Dola AJ, Rony MKK. Ethical considerations in research. J Nurs Res Patient Saf Pract. 2021;1(1):1–4.

  3. Padulo J, Oliva F, Frizziero A, Maffulli N. Basic principles and recommendations in clinical and field science research: 2018 update. Muscles Ligaments Tendons J. 2018;8(3):305–7.

  4. Spellecy R, Busse K. The history of human subjects research and rationale for institutional review board oversight. Nutr Clin Pract. 2021;36(3):560–7.

  5. Varkey B. Principles of clinical ethics and their application to practice. Med Princ Pract. 2021;30(1):17–28.

  6. National Institutes of Health. The Nuremberg Code. Available from: https://history.nih.gov/display/history/Nuremberg+Code. Accessed 16 Jan 2023.

  7. World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191–4.

  8. Nair SC, Ibrahim H. GCP compliance and readability of informed consent forms from an emerging hub for clinical trials. Perspect Clin Res. 2015;6(2):104–8.

  9. National Committee of BioEthics (NCBE). Implementing regulations of the law of ethics of research on living creatures. Riyadh: King Abdulaziz City for Science and Technology; 2016.

  10. Alahmad G, Al-Jumah M, Dierickx K. Review of national research ethics regulations and guidelines in Middle Eastern Arab countries. BMC Med Ethics. 2012;13(1):34.

  11. Alahmad G. The Saudi law of ethics of research on living creatures and its implementing regulations. Dev World Bioeth. 2017;17(2):63–9.

  12. Saudi Food and Drug Authority. Clinical Trials. 2013. Available from: http://www.sfda.gov.sa/en/drug/Clinical_Trials/Pages/default.aspx. Accessed 13 June 2016.

  13. Barber B. Constructing the social system. 1st ed. New York: Routledge; 2021.

  14. Mallela KK, Walia R, Tm CD, Das M, Sepolia S, Sethi P. Knowledge, attitudes and practice about research ethics among dental faculty in the North India. J Int Oral Health. 2015;7(Suppl 2):52–6.

  15. Azakir B, Mobarak H, Al Najjar S, El Naga AA, Mashaal N. Knowledge and attitudes of physicians toward research ethics and scientific misconduct in Lebanon. BMC Med Ethics. 2020;21(1):39.

  16. El-Dessouky HF, Abdel-Aziz AM, Ibrahim C, Moni M, Abul Fadl R, Silverman H. Knowledge, awareness, and attitudes about research ethics among dental faculty in the Middle East: a pilot study. Int J Dent. 2011;2011:694759.

  17. Good Clinical Practice Network. ICH harmonised guideline integrated addendum to ICH E6(R1): Guideline for Good Clinical Practice ICH E6(R2). Available from: https://ichgcp.net/. Accessed 16 Jan 2023.

  18. U.S. Food & Drug Administration. CFR - Code of Federal Regulations Title 21. Available from: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=50.25. Accessed 16 Jan 2023.

  19. DeVon HA, Block ME, Moyle-Wright P, Ernst DM, Hayden SJ, Lazzara DJ, et al. A psychometric toolbox for testing validity and reliability. J Nurs Scholarsh. 2007;39:155–64.

  20. Bolarinwa OA. Principles and methods of validity and reliability testing of questionnaires used in social and health science researches. Niger Postgrad Med J. 2015;22:195–201.

  21. Sangoseni O, Hellman M, Hill C. Development and validation of a questionnaire to assess the effect of online learning on behaviors, attitude and clinical practices of physical therapists in the United States regarding of evidence-based practice. Internet J Allied Health Sci Pract. 2013;11(2):1–12.

  22. Vargas D, Luis MA. Development and validation of a scale of attitudes towards alcohol, alcoholism, and alcoholics. Rev Lat Am Enfermagem. 2008;16(5):895–902.

  23. Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–97.

  24. Frey BB, editor. The SAGE encyclopedia of educational research, measurement, and evaluation. Thousand Oaks: SAGE Publications, Inc.; 2018. p. 1573–4.

  25. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7–16.

  26. Lord FM. The relation of the reliability of multiple-choice tests to the distribution of item difficulties. Psychometrika. 1952;17(2):181–94.

  27. Oosterhof A. Classroom applications of educational measurement. 3rd ed. Upper Saddle River: Prentice-Hall, Inc.; 2001.

  28. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. 2nd ed. New York: Oxford University Press; 1995.

  29. Oluwatayo JA. Validity and reliability issues in educational research. J Educ Soc Res. 2012;2(2):391–400.

  30. Taherdoost H. Validity and reliability of the research instrument; how to test the validation of a questionnaire/survey in a research. Int J Acad Res Manag. 2016;5(3):28–36.

  31. van Teijlingen E, Hundley V. The importance of pilot studies. Nurs Stand. 2002;16(40):33–6.

  32. Chakrabartty SN. Best split-half and maximum reliability. IOSR J Res Method Educ. 2013;3(1):1–8.

  33. Price P, Jhangiani R, Chiang I. Reliability and validity of measurement. In: Research methods in psychology. 2nd Canadian ed. Victoria: BCcampus; 2015. p. 96–102.

  34. Polit DF, Beck C. Essentials of nursing research. 6th ed. Philadelphia: Lippincott Williams; 2006.

  35. Altman DG. Practical statistics for medical research. 1st ed. London: Chapman & Hall/CRC; 1991.

  36. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.

  37. Engelhardt HT Jr. Core competencies for health care ethics consultants: in search of professional status in a post-modern world. HEC Forum. 2011;23(3):129–45.

  38. Feehan AK, Garcia-Diaz J. Investigator responsibilities in clinical research. Ochsner J. 2020;20(1):44–9.

  39. Cummings CL, Geis GM, Feldman HA, Berson ER, Kesselheim JC. Assessing ethics knowledge: development of a test of ethics knowledge in neonatology. J Pediatr. 2018;199:57–64.

  40. Cummings CL, Geis GM, Kesselheim JC, Sayeed S. Ethics and professionalism education during neonatal-perinatal fellowship training in the United States. J Perinatol. 2015;35(10):875–9.

  41. Tarboush NA, Alkayed Z, Alzoubi KH, Al-Delaimy WK. The understanding of research ethics at health sciences schools in Jordan: a cross-sectional study. BMC Med Educ. 2020;20(1):121.

  42. Kesselheim JC, McMahon GT, Joffe S. Development of a test of residents’ ethics knowledge for pediatrics (TREK-P). J Grad Med Educ. 2012;4(2):242–5.

  43. Kesselheim JC, Bhatia N, Cronin A, Kodish E, Joffe S. Knowledge of pediatric ethics: results of a survey of pediatric ethics consultants. AJOB Empir Bioeth. 2015;6(4):19–30.

  44. Ajuwon AJ, Kass N. Outcome of a research ethics training workshop among clinicians and scientists in a Nigerian university. BMC Med Ethics. 2008;9(1):1–9.

  45. Taylor HA, Kass NE, Ali J, Sisson S, Bertram A, Bhan A. Development of a research ethics knowledge and analytical skills assessment tool. J Med Ethics. 2012;38(4):236–42.

Acknowledgements

The authors would like to acknowledge the efforts of Mohsen H. Ayyash for his great support with the data statistics and Federico Germani for his manuscript review.

Funding

Not applicable.

Author information

Contributions

M.M. contributed to the study design, data collection, analysis, interpretation, and statistics of the data, and wrote the draft and the final manuscript. M.F. was involved in the study design and reviewed the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to May M. Al-Madaney.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Institutional Review Board of King Fahad Medical City (IRB log No. 19-240). All data were collected anonymously; no names or other identifiers were requested from participants in the questionnaire. The online questionnaire was completed and collected only after participants gave their consent by clicking the continue button. Informed consent was obtained from all subjects and/or their legal guardian(s).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

First draft of the questionnaire.

Additional file 2.

First revised questionnaire after face validity.

Additional file 3.

Second revised questionnaire after content validity.

Additional file 4.

Third revised questionnaire after pilot study.

Additional file 5.

Final version of the questionnaire.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Al-Madaney, M.M., Fässler, M. Development and validation of a tool to assess researchers’ knowledge of human subjects’ rights and their attitudes toward research ethics education in Saudi Arabia. BMC Med Ethics 24, 94 (2023). https://doi.org/10.1186/s12910-023-00968-z
