Strides in Development of Medical Education

Document Type: Original Article

Authors

1 MD MPH PhD. Department of E-learning in Medical Education, School of Medicine, Center for Excellence in E-learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran

2 MSc. Department of E-learning in Medical Education, School of Medicine, Center for Excellence in E-learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran

3 PhD. Department of E-learning in Medical Education, School of Medicine, Center for Excellence in E-learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran

Abstract

Background: Students' views play an important role in improving the quality of university services. They determine the degree of discrepancy between the status quo and students' expectations, providing the basis for improving educational service quality.
Objectives: This study aimed to design and evaluate the psychometric properties of a tool for assessing the quality of educational services in e-learning centers, based on the SERVQUAL model.
Methods: In the first step of this descriptive-analytical study, we searched the literature for instruments and factors affecting service quality in higher education and compiled them. Similar questions were then merged, and all questions were rewritten for the e-learning context based on experts' opinions; some questions were also added or removed according to expert opinion and the study objective. The content validity ratio (CVR) and content validity index (CVI) were used to determine content validity. In addition, a sample of 110 students participated in the study to examine the construct validity and reliability of the instrument.
Results: In the first stage, 88 questions were extracted from related studies. After omitting similar questions, 56 questions remained. These questions were modified to make them compatible with the e-learning environment. Out of the 56 items assessed for CVI and CVR, 20 were ultimately approved, yielding a tool for evaluating the quality of educational services in e-learning centers with a test-retest reliability of 0.769 and an internal consistency of 0.824. This tool explained 67.2% of the total variance.
Conclusion: The present research showed that the developed tool has acceptable validity and reliability and can be used to assess the quality of educational services in e-learning centers from students' viewpoints.

Highlights

Rita Mojtahedzadeh

Mohsen Mohammadzadeh

Mitra Gharib

Aeen Mohammadi


Background

In every society, many organizations and institutions provide various services to people from all walks of life. Among them, the educational services delivered by universities not only play an important role in students' lives but also affect all parts of society (1). Universities and higher education institutions are considered the main centers of thought and science production in every society and play a significant role through the presence and activity of thinkers, scholars, and students in the community (2). Universities maximize productivity and the efficient use of opportunities in a country through careful planning for educational, research, and human resources development. Besides, they improve education quality in line with national and international scientific and educational developments (3).

The quality of services is one of the factors contributing to the success and sustainability of any organization, including the higher education system, whose sustainable development requires simultaneous and balanced growth in both quantitative and qualitative dimensions. Quality in higher education is a multidimensional concept influenced by several criteria, including educational status, the academic system, curricula, institutional conditions, and the standards of the academic discipline (4). Since service satisfaction is the distinguishing element among organizations in a competitive market, student satisfaction is a determining factor in evaluating the quality of services provided by higher education organizations (5). Improving the quality of education and research, alongside providing specialized educational services, are the ultimate goals of any educational system. Hence, failing to achieve these goals wastes economic resources and undermines students' self-esteem and their personal and social stability (6).

Meanwhile, Information and Communication Technology (ICT) provides the opportunity to combine the art and science of teaching to create an approach called e-learning (7). E-learning has realized many educational goals, including synchronous learning, cooperative learning, self-assessment, and self-direction. Today, e-learning in higher education is developing rapidly; however, it faces challenges such as increasing demand for services and competition among educational centers. Hence, it is essential to improve its efficiency by adopting appropriate solutions based on evaluation evidence (8). E-learning is an online system that facilitates teaching and learning through various web-based software tools and applications. This modern educational approach incorporates asynchronous and synchronous tools to support learning processes. E-learning enhances knowledge acquisition, clinical skill development, and learner satisfaction across various domains in medical education (9-11).

Program evaluation reveals the existing and desired status and the strengths, weaknesses, opportunities, and obstacles of e-learning systems. Students play a vital role in enhancing the quality of service and evaluating educational systems as they are the main clientele and recipients of system services. Moreover, their perspectives are the key indicator of a university’s overall quality (7, 12).

One method used to evaluate the quality of universities' services is the SERVQUAL model developed by Parasuraman (13). This tool measures students' perceptions in five dimensions of service delivery: tangibility (facilities, equipment, staff, faculty members, and communication channels), reliability (the responsibility to deliver services dependably), responsiveness (the tendency to cooperate with and assist students), assurance (the competence and ability of staff and faculty members to create a sense of trust and confidence among students), and empathy (individual attention to every student, convincing them that the university system understands them) (14). The tool examines service quality by comparing students' expectations and perceptions in each dimension. Parasuraman et al. define service quality as the difference between clients' expectations and perceptions of services, known as the expectations-perceptions gap, represented by the equation Quality = Perception − Expectation. Clients' expectations are their beliefs about the provided services, which serve as reference points when evaluating those services. Perception refers to how the client evaluates the service received, i.e., the client's judgment of the service. Perceived quality is therefore a form of insight related to satisfaction, resulting from the comparison between expectations and perceptions of performance (8). A negative gap score indicates that perception falls below expectations and that students are not satisfied with the services; a positive score shows that the provided services exceed expectations (14).
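
As a minimal illustration of the gap computation (using hypothetical ratings, not the study's data), the SERVQUAL gap for one dimension can be sketched as:

```python
# Parasuraman's gap score: Quality = Perception - Expectation.
# The ratings below are hypothetical examples, not the study's data.
def gap_score(perceptions, expectations):
    """Mean perception minus mean expectation for one dimension."""
    return sum(perceptions) / len(perceptions) - sum(expectations) / len(expectations)

expectations = [5, 4, 5, 4]  # importance ratings on a 1-5 Likert scale
perceptions = [3, 3, 4, 2]   # perceived-performance ratings on the same scale

gap = gap_score(perceptions, expectations)
print(f"gap = {gap:+.2f}")   # prints gap = -1.50: services fall short of expectations
```

A negative result reproduces the interpretation above: perceptions lag expectations, signaling dissatisfaction.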

Nowadays, the Internet has diminished the necessity of physical presence on university campuses in many instances. Consequently, governments are encouraging the adoption of e-learning solutions. Hence, the number of higher education institutes implementing strategies and policies for integrating e-learning technology into their educational processes is increasing. Therefore, ensuring the quality of e-services from different viewpoints, including students, is necessary to provide effective e-learning systems (15, 16).

Despite evaluations conducted at different universities based on the SERVQUAL model, we did not find any research examining the quality gap of e-learning services (17). Service quality dimensions differ across service industries, and even within the same industry; moreover, customers are heterogeneous. A SERVQUAL model customized for a specific context is therefore beneficial (18). Hence, given the importance of educational service quality, the need for high-quality online educational services, and the value of a customized SERVQUAL tool, this study aimed to develop a valid and reliable tool for evaluating the quality of educational services in e-learning centers.

Objectives

This study aimed to design and evaluate the psychometric properties of a tool for assessing the quality of educational services in e-learning centers, based on the SERVQUAL model.

Methods

This descriptive-analytical study started with a review of the literature on the quality of educational services based on the SERVQUAL model, gathering the questionnaires and factors affecting service quality in higher education. We then compiled all the questionnaires' items, reviewed them carefully, and omitted or merged similar items. Psychometric assessment of this draft questionnaire was conducted to ensure its validity and reliability.

Instrument development: In the first stage of this research, 12 studies addressing questionnaires and factors regarding the quality of educational services based on the SERVQUAL model were collected (1, 3, 5, 6, 13, 14, 17, 19-23). Their items were then collected and indexed, yielding 88 questions. After omitting similar or irrelevant questions, 56 questions remained. An expert panel modified the items to make them compatible with an e-learning environment. The panel consisted of four e-learning and educational specialists and five university teachers with more than three years of experience in e-teaching. Modification of the questions was performed over two rounds of the expert panel.

In the next step, the face validity of the instrument was assessed by other experts, including five faculty members with higher education/e-learning expertise or at least three years of e-teaching experience, as well as seven junior and senior e-students. To assess content validity, the Content Validity Ratio (CVR) and Content Validity Index (CVI) were applied. To determine the CVR, experts evaluated each item on a three-point scale of "necessary", "useful but not necessary", and "not necessary". According to Lawshe's (1975) formula, if the calculated value for a question was greater than 0.57, its content validity was accepted. The Waltz and Bausell method was applied to determine the CVI (24). Each question was rated on three criteria: relevance (not relevant, relatively relevant, relevant, completely relevant), simplicity (not simple, relatively simple, simple, quite simple), and clarity (not clear, relatively clear, clear, very clear). For each criterion, the number of experts choosing the two highest options (e.g., "relevant" and "completely relevant") was divided by the total number of experts. If the calculated value for a question was higher than 0.79, it was considered acceptable. The final instrument consisted of 20 pairs of questions in two parts, measuring students' expectations and perceptions of the quality of educational services. Items were rated on a five-point Likert scale: "quite important", "important", "relatively important", "less important", and "the least important" for the expectations section; and "very good", "good", "average", "poor", and "very poor" for the perceptions section. Each item was scored between 1 and 5. The tool was named Service Quality in E-Learning Centers (SQELC).
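
A sketch of the two content-validity indices described above, computed on made-up expert ratings (not the study's panel data):

```python
# Content validity indices, illustrated with invented expert ratings.
def cvr(n_essential, n_experts):
    """Lawshe (1975): CVR = (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def cvi(ratings, top_levels=(3, 4)):
    """Share of experts choosing the two highest points of a 1-4 scale
    (e.g. 'relevant' and 'completely relevant')."""
    return sum(r in top_levels for r in ratings) / len(ratings)

# 9 of 10 experts rate an item 'necessary': CVR = (9 - 5) / 5 = 0.80
print(cvr(9, 10))                           # 0.8 -> above the study's 0.57 cutoff
print(cvi([4, 4, 3, 4, 2, 4, 3, 4, 4, 3]))  # 0.9 -> above the 0.79 cutoff
```

Items passing both thresholds would be retained, mirroring the 56-to-20 reduction reported above.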

Exploratory Factor Analysis was also administered to examine the construct validity (25).

 To assess the reliability of the tool, the test-retest approach was applied, and Cronbach's alpha was used to evaluate its internal consistency.
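
Internal consistency can be sketched with a minimal Cronbach's alpha on hypothetical item scores (rows are respondents, columns are items; not the study's responses):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
# The scores below are hypothetical, not the study's data.
def cronbach_alpha(scores):
    k = len(scores[0])   # number of items
    def pvar(xs):        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(data), 3))  # prints 0.904 for this toy dataset
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the benchmark the 0.824 figure reported below clears.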

Study population and sampling strategy: The statistical population was students of the Tehran University of Medical Sciences (TUMS) who studied in virtual post-graduate programs. All majors and courses in these programs were delivered through virtual and blended approaches.

The required sample size for assessing reliability through test-retest was 20 participants, while the recommended sample size for evaluating construct validity ranges from 100 to 200 participants. Therefore, all eligible students were included in the study using a census sampling method. There are several methods to determine the adequacy of the sample size for factor analysis; however, the most widely accepted and reliable approach is the calculation of the Kaiser-Meyer-Olkin (KMO) index. A KMO value greater than 0.7 indicates sufficient sample adequacy to conduct factor analysis (26, 27). All analyses were performed in IBM SPSS Statistics for Windows, version 17 (IBM Corp., Armonk, N.Y., USA).
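
The KMO index mentioned above can be computed from the correlation matrix and its inverse; the sketch below uses simulated one-factor data, not the study's responses:

```python
import numpy as np

# KMO sampling adequacy: sum of squared correlations over the sum of squared
# correlations plus squared partial correlations (off-diagonal entries only).
def kmo(data):
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations derived from the inverse correlation matrix
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    off = ~np.eye(corr.shape[0], dtype=bool)
    r2 = np.sum(corr[off] ** 2)
    a2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + a2)

# Simulated data: six items driven by a shared latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 6))

print(round(kmo(items), 3))  # well above the 0.7 adequacy threshold
```

Because all six simulated items share one latent cause, their pairwise correlations are large and their partial correlations small, so the KMO lands close to 1, which is the pattern a factorable dataset should show.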

Results

Content Validity: Out of the 56 items assessed for CVI and CVR, 20 items were ultimately approved (Table 1).

Table 1. CVI and CVR of the approved items

| Item No. | Item | CVI | CVR |
|---|---|---|---|
| 1 | Facilitating discussion and exchanging ideas in the LMS* and virtual classroom by instructors | 1 | 0.70 |
| 2 | Providing sufficient curriculum resources in the LMS to increase students' specialized knowledge | 1 | 1 |
| 3 | Encouraging students' creativity and considering their achievements | 0.85 | 0.75 |
| 4 | Creating a sense of trust for students to receive services and correcting staff behavior | 0.85 | 0.85 |
| 5 | Appropriate timing when providing resources and assignments to students in the LMS | 1 | 1 |
| 6 | Positive attitude of staff towards students | 0.85 | 1 |
| 7 | Providing a quiet and convenient place for studying in the center | 1 | 0.80 |
| 8 | Realizing students' needs by the staff | 1 | 1 |
| 9 | Assigning suitable assignments regarding the lessons in the LMS | 1 | 1 |
| 10 | Respectful behavior and attitude of center managers toward students | 1 | 1 |
| 11 | Providing curriculum in a systematic and interconnected manner in the LMS | 1 | 1 |
| 12 | Providing comprehensible and usable e-content for the student | 1 | 1 |
| 13 | Evaluating and providing feedback on student assignments | 0.85 | 0.80 |
| 14 | Electronic recording of students' academic profiles | 0.80 | 0.70 |
| 15 | Teachers' virtual teaching skills and experience | 1 | 1 |
| 16 | Having a feedback system to improve the quality of faculty services | 1 | 1 |
| 17 | Introducing and announcing the faculty services clearly | 0.80 | 1 |
| 18 | Providing a constant connection to high-speed internet | 1 | 0.85 |
| 19 | Providing attractive guidelines and instructions in virtual systems | 1 | 1 |
| 20 | Providing appropriate hardware and software facilities | 1 | 1 |

*LMS: Learning Management System

Construct Validity: The questionnaire was delivered to 110 students of virtual programs at TUMS to conduct Exploratory Factor Analysis (EFA). Seventy-two of the 110 students (47 women and 25 men) completed the questionnaire (response rate: 65.5%). The descriptive statistics and characteristics of the participants are presented in Table 2.

Table 2. Characteristics of the study participants

| Categories | Value |
|---|---|
| Sex: N (%) |  |
| Male | 25 (34.7) |
| Female | 47 (65.3) |
| Age: Mean (SD) | 37.21 (7.60) |
| Degree: N (%) |  |
| E-learning in Medical Education | 38 (52.8) |
| Educational Technology in Medical Sciences | 17 (23.6) |
| Medical Librarianship and Information Science | 3 (4.2) |
| Medical Education | 14 (19.4) |

The Kaiser-Meyer-Olkin (KMO) index for the EFA was 0.773, indicating a sufficient sample size. Bartlett's test of sphericity (chi-square = 695.89, P < 0.001) also confirmed the suitability of the data for factor analysis. Five factors were extracted, consistent with the underlying model, and the varimax rotation method was applied to determine the items of each factor; loadings below 0.45 were suppressed. Together, these five factors explained 67.17% of the total variance, an acceptable value (Table 3).
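
The per-factor variance figures of this kind are sums of squared rotated loadings divided by the number of items; a toy illustration (with invented loadings, not the study's matrix):

```python
# % of variance per component = (sum of squared loadings) / n_items * 100.
# The loadings below are invented for illustration only.
def variance_explained(loadings):
    n_items = len(loadings)
    n_comps = len(loadings[0])
    pct = []
    for c in range(n_comps):
        ssl = sum(row[c] ** 2 for row in loadings)  # sum of squared loadings
        pct.append(100 * ssl / n_items)
    return pct

loadings = [
    [0.8, 0.0],
    [0.7, 0.1],
    [0.0, 0.9],
    [0.1, 0.6],
]
pct = variance_explained(loadings)
print([round(p, 1) for p in pct])  # per-component % of variance: [28.5, 29.5]
print(round(sum(pct), 1))          # cumulative % explained: 58.0
```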

Table 3. Extracted components and total variance explained for the SQELC tool

| Component | % of Variance (Rotation Sums of Squared Loadings) | Cumulative % |
|---|---|---|
| 1 | 21.852 | 21.852 |
| 2 | 12.864 | 34.716 |
| 3 | 12.647 | 47.362 |
| 4 | 11.260 | 58.622 |
| 5 | 8.544 | 67.166 |

The scree plot (Figure 1) also confirms the suitability of the five-factor solution.

Table 4 shows the rotated factor matrix. Although it is recommended that factors consist of at least three items to ensure internal consistency and stability, there is precedent in the literature for retaining two-item factors when they demonstrate strong conceptual coherence and satisfactory factor loadings (26). In our analysis, both items within Factor 5 loaded exclusively on this factor, suggesting a distinct underlying construct. Furthermore, the content of these items reflects a specific, meaningful domain within the broader construct being measured.

Based on the results of the factor analysis and the research background, the factors were given the same names as in the original questionnaires. The factor names, their items, and the internal consistency of each factor are presented in Table 5.

Reliability: The reliability of the instrument was examined through the test-retest method. In a 20-person sample retested after two weeks, the Intraclass Correlation Coefficient (ICC) was 0.769. Cronbach's alpha across all items showed an internal consistency of 0.824 for the whole instrument. The results of the questionnaire analysis among participants are presented in Table 6.

Table 4. Factor loadings of items on extracted components after varimax rotation in exploratory factor analysis

| Item | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 |  |  | 0.692 |  |  |
| 2 |  |  | 0.463 |  |  |
| 3 |  |  |  |  | 0.577 |
| 4 |  |  |  | 0.736 |  |
| 5 |  |  | 0.593 |  |  |
| 6 | 0.647 |  |  |  |  |
| 7 |  |  |  | 0.641 |  |
| 8 | 0.659 |  |  | 0.478 |  |
| 9 |  | 0.575 |  |  |  |
| 10 |  |  |  | 0.741 |  |
| 11 |  | 0.855 |  |  |  |
| 12 |  | 0.834 |  |  |  |
| 13 | 0.525 | 0.469 |  |  |  |
| 14 | 0.807 |  |  |  |  |
| 15 |  |  | 0.486 |  |  |
| 16 |  |  |  |  | 0.779 |
| 17 | 0.788 |  |  |  |  |
| 18 |  |  | 0.743 |  |  |
| 19 | 0.760 |  |  |  |  |
| 20 | 0.737 |  |  |  |  |

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Loadings below 0.45 are suppressed.

 

Table 5. Components' names, their items, and internal consistencies

| Component | Name | Items | Cronbach's alpha |
|---|---|---|---|
| 1 | Responsiveness | 6, 8, 13, 14, 17, 19, 20 | 0.887 |
| 2 | Assurance | 9, 11, 12 | 0.842 |
| 3 | Reliability | 1, 2, 5, 15, 18 | 0.716 |
| 4 | Empathy | 4, 7, 10 | 0.401 |
| 5 | Tangibility | 3, 16 | 0.345 |

 

Table 6. Comparing expectations and perceptions of service quality in e-learning schools using SQELC

| Factor | Expectations | Perceptions | Gap | t | P |
|---|---|---|---|---|---|
| Responsiveness | 29.71 | 21.87 | -7.84 | 12.940 | < 0.001 |
| Assurance | 14.33 | 10.99 | -3.35 | 11.873 | < 0.001 |
| Reliability | 23.44 | 18.23 | -5.22 | 8.532 | < 0.001 |
| Empathy | 12.75 | 11.46 | -1.29 | 4.373 | < 0.001 |
| Tangibility | 8.96 | 6.30 | -2.66 | 7.727 | < 0.001 |

Discussion

This study aimed to design a service quality assessment instrument for e-learning centers based on the SERVQUAL model and to evaluate its psychometric properties. The final tool consisted of 20 items and five extracted factors named after the dimensions of the original SERVQUAL model.

E-learning is an instructional strategy that enhances teaching-learning processes through web-based systems, software tools, and applications (28). Characteristics of e-learning, such as being student-centered and self-directed, and its capacity to deploy different types of interaction, have made it a persistent and effective approach in which a wide range of asynchronous and synchronous tools are used (29, 30). The differences between e-learning and face-to-face instruction show that the dimensions and factors affecting their quality differ. E-learning can be divided into system design, system delivery, and system outcome aspects, all of which should be considered when measuring quality (22).

A positive attitude toward students and realization of their needs, effective feedback, and appropriate facilities such as suitable hardware and software, guidelines, and their active announcement form the main parts of the "responsiveness" factor. The "assurance" factor consists of the appropriate delivery of assignments, curriculum, and e-content through media such as learning management systems. Sufficient and timely resources, assignments, communication facilities like discussion groups, experienced instructors, and access to reliable internet compose the "reliability" factor. A sense of trust for students, a convenient environment, and the respectful behavior of managers constitute "empathy". Finally, inspiring students' creativity, considering their achievements, and providing a feedback system for improving the quality of services are classified as "tangibility".

Sugant determined four dimensions for e-learning services to be evaluated as service quality: Content (design and presentation, structure, completeness), usability (attractive interface, ease of navigation, interactivity, progress tracking), technology (fast, reliable, support), and responsiveness (assessment and evaluation) (31).

In SQELC, items 2, 5, 9, 11, and 12 cover components of the "content" dimension; items 1, 13, 14, and 16 cover components of the "usability" dimension; items 18 and 20 relate to "technology"; and items 3, 15, and 16 correspond to the "responsiveness" dimension.

Ayuni and Mulyana (32) assessed the relationship between service quality and students' satisfaction and loyalty in an e-learning system. They used four previously developed dimensions of e-learning service quality: the teaching dimension, administrative services, support services, and the system (33). Items 5, 7, 10, and 17 correspond to administrative services; items 2, 9, 11, 12, 13, and 15 to the teaching dimension; items 3, 6, 7, 8, 10, 17, 18, and 19 to support services; and, finally, the "system" dimension can be evaluated through items 14, 16, 17, 18, and 20. Notably, some items overlap between different dimensions.

The results of SQELC can guide the analysis of deficiencies in e-learning centers and suggest solutions to reduce the observed quality gaps in educational services. The results obtained from the Virtual School of TUMS showed that perceptions and expectations were close in all aspects of the quality of educational services, including responsiveness, reliability, assurance, and empathy. Although the gap was negative in all dimensions, meaning students' expectations exceeded their perceptions of the actual status quo, the differences were small in terms of effect size (gap size); their statistical significance may reflect the large sample size. The quality gap in the responsiveness dimension may indicate students' dissatisfaction with unclear announcements from the school or its inadequacy in providing feedback on students' assignments in the virtual setting. This illustrates the need for instructors and professors to engage with students and provide feedback on their homework. The style of implementation of e-courses, where students have less access to teachers and traditional classes are unavailable, doubles the importance of feedback on students' homework. Quality of service in terms of reliability means the ability to perform the service dependably and with commitment. The service gap in this dimension can reflect students' dissatisfaction with the quality of the internet and with interactions in the LMS environment, such as the expected time for students to deliver homework, the provision of tutorials, and the facilitation of discussions in the virtual classroom by instructors. The assurance dimension in SQELC indicates the competence and ability of university staff and faculty members to induce a sense of trust. The negative gap in this dimension indicates that, from the students' point of view, assignments are not appropriate or lesson-related, and the school has failed to instill confidence among students and make them feel safe and secure. The tangible and physical dimension refers to the conditions and environment available for providing services, including facilities, equipment, staff, and communication channels.

In addition, empathy (factor 4, with three items) refers to a personalized attitude toward each student's situation; in this case, it means that students believe the university staff understand each of them individually. Proper behavior and respectful treatment can raise students' perceptions toward the level of their expectations in this dimension. The negative gap in service quality in the empathy dimension indicates that students lack appropriate mechanisms to express their views and suggestions.

Limitations: One limitation of this study is its focus on a single university. It is recommended that the SQELC be evaluated across multiple medical universities to better establish the generalizability of its results. Additionally, the available sample was small; although the statistical indices indicated its adequacy, applying the tool to a larger population is recommended. Furthermore, investigating the results across different academic fields would be valuable.

Conclusion

In this study, SQELC, an instrument for assessing service quality in e-learning centers, was devised. This instrument uses student feedback to evaluate service quality; it is suggested that other stakeholders, such as teachers and staff, also be considered in future evaluation plans. The results of evaluating service quality in e-learning centers serve as guidelines for managers, instructors, and staff, informing them of the strengths and weaknesses of their services so that they can improve educational service quality by addressing weaknesses and reinforcing strengths.

1. Goumairi O, Aoula ES, Ben Souda S. Application of the SERVQUAL Model for the Evaluation of the Service Quality in Moroccan Higher Education: Public Engineering School as a Case Study. International Journal of Higher Education. 2020;9(5):223-9. doi:10.5430/ijhe.v9n5p223.
2. Rezaiyan MK, Bazaz SMM. Quality Gap in educational services based on SERVQUAL Model in Mashhad Medical School. Research on Medicine. 2016;40(1):17-23. [In Persian]
3. Ko CH, Chou CM. Apply the SERVQUAL instrument to measure service quality for the adaptation of ICT technologies: A case study of nursing homes in Taiwan. Healthcare (Basel). 2020 Apr 24;8(2):108. doi:10.3390/healthcare8020108. [PMID: 32344589] [PMCID: PMC7349199]
4. Martínez-Argüelles M, Castán J, Juan A. How do Students Measure Service Quality in e-Learning? A Case Study Regarding an Internet-Based University. Electronic Journal of e-Learning. 2010;8(2):151-60.
5. Aghamirzaee Mahali T, Babazadeh M, Rahimpour Kami B, Salehi Omran A. Assessment and Ranking of Educational (Administrative) Services Quality from Students' Opinion (A Case Study on Mazandaran University of Science and Technology). Educ Strategy Med Sci. 2017;10(4):288-301. [In Persian]
6. Gilavand A, Maraghi E. Assessing the quality of educational services of Iranian universities of medical sciences based on the SERVQUAL evaluation model: A systematic review and meta-analysis. Iran J Med Sci. 2019 Jul;44(4):273-284. doi:10.30476/IJMS.2019.44946. [PMID: 31439970] [PMCID: PMC6661524]
7. Li CY, Asimiran S, Suyitno S. Students' expectations and perceptions on service quality of e-learning in a selected faculty of a public university in Malaysia. Proceedings of the 3rd International Conference on Educational Management and Administration; 2018 Oct 6-7; Malang, Indonesia. 2018:85-90.
8. Samir Roushdy A, El-Ansary O. Measuring Students' Perception of E-SERVQUAL at E-learning Institutions: Evidence from Egypt. Scientific Journal for Economic & Commerce. 2017;47(2):583-614. doi:10.21608/jsec.2017.40514.
9. Mojtahedzadeh R, Mousavi A, Shirazi M, Mohammadi A. Factors creating an educational atmosphere in cyberspace: a qualitative study. Strides Dev Med Educ. 2017 May 1;14(2):e66898.
10. Mousavi A, Mohammadi A, Mojtahedzadeh R, Shirazi M, Rashidi H. E-Learning Educational Atmosphere Measure (EEAM): A New Instrument for Assessing E-Students' Perception of Educational Environment. Research in Learning Technology. 2020;28:1-12. doi:10.25304/rlt.v28.2308.
11. Liu Q, Peng W, Zhang F, Hu R, Li Y, Yan W. The Effectiveness of Blended Learning in Health Professions: Systematic Review and Meta-Analysis. J Med Internet Res. 2016 Jan 4;18(1):e2. doi:10.2196/jmir.4807. [PMID: 26729058] [PMCID: PMC4717286]
12. Handrinos MC, Folinas D, Rotsios K. Using the SERVQUAL model to evaluate the quality of services for a farm school store. Journal of Marketing and Consumer Behaviour in Emerging Markets. 2015;1(1):62-74. doi:10.7172/2449-6634.jmcbem.2015.1.5.
13. Đonlagić S, Fazlić S. Quality assessment in higher education using the SERVQUAL model. Management: Journal of Contemporary Management Issues. 2015;20(1):39-57.
14. Parasuraman A, Zeithaml VA, Berry LL. SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing. 1988;64(1):12-40.
15. Ithnin F, Sahib S, Eng CK, Sidek S, Harun RN. Mapping the Futures of Malaysian Higher Education: A Meta-Analysis of Futures Studies in the Malaysian Higher Education Scenario. Journal of Futures Studies. 2018 Mar 1;22(3):1-18. doi:10.6531/JFS.2018.22(3).00A1.
16. Mohammadi A, Norouzadeh R, Hosseini SE, Mohammadi M, Tahrekhani M. Investigating the Effectiveness of Online Learning in the Post-Coronavirus (COVID-19) Era from the Perspectives of Students and Faculty Members of Selected Universities of Medical Sciences in Iran. Strides Dev Med Educ. 2025;22(1):e1447. doi:10.22062/sdme.2025.200378.1447.
17. Uppal MA, Ali S, Gulliver SR. Factors determining e-learning service quality. British Journal of Educational Technology. 2018;49(3):412-26. doi:10.1111/bjet.12552.
18. Park SJ, Yi Y, Lee YR. Heterogeneous dimensions of SERVQUAL. Total Quality Management & Business Excellence. 2021;32(1-2):92-118. doi:10.1080/14783363.2018.1531700.
19. Hinkin TR. A review of scale development practices in the study of organizations. Journal of Management. 1995;21(5):967-88. doi:10.1177/014920639502100509.
20. Hosseini SM, Vakili V, Mosa Farkhani E. Comparing pharmacy students' perceptions and expectations of quality of educational services at Mashhad University of Medical Sciences based on SERVQUAL model. Iran J Med Educ. 2017;17:504-15. [In Persian]
21. Buditjahjanto IG. Customer Satisfaction Analysis Based on SERVQUAL Method to Determine Service Level of Academic Information Systems on Higher Education. Khazanah Informatika: Jurnal Ilmu Komputer dan Informatika. 2020;6(2):103-8. doi:10.23917/khif.v6i2.10690.
22. Abd Rahman NA. E-learning service quality case study: Al Madinah International University (dissertation). Malaysia: Universiti Teknologi MARA (UiTM); 2017.
23. Al-Mushasha NF, Nassuora AB. Factors determining e-learning service quality in Jordanian higher education environment. Journal of Applied Sciences (Faisalabad). 2012;12(14):1474-80.
24. Lawshe CH. A quantitative approach to content validity. Personnel Psychology. 1975;28(4):563-75. doi:10.1111/j.1744-6570.1975.tb01393.x.
25. Raj GM, Adhimoolam M. A Novel Questionnaire to Assess the Knowledge, Attitude, and Practice of Medical Professionals Regarding Participation in Scientific Programs. Strides Dev Med Educ. 2024;21(1):156-65. doi:10.22062/sdme.2024.199185.1305.
26. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate Data Analysis. 8th ed. Boston: Cengage; 2019:136.
27. Shokoohi S, Emami AH, Mohammadi A, Ahmadi S, Mojtahedzadeh R. Psychometric properties of the Postgraduate Hospital Educational Environment Measure in an Iranian hospital setting. Med Educ Online. 2014 Aug 8;19:24546. doi:10.3402/meo.v19.24546. [PMID: 25109351] [PMCID: PMC4127829]
28. Cassidy S. Virtual learning environments as mediating factors in student satisfaction with teaching and learning in higher education. Journal of Curriculum and Teaching. 2016;5(1):113-23. doi:10.5430/jct.v5n1p113.
29. Hampel G, Dancsházy K. Creating a virtual learning environment. Journal of Agricultural Informatics. 2014;5(1):46-55. doi:10.17700/jai.2014.5.1.124.
30. Bdiwi R, de Runz C, Faiz S, Ali-Cherif A. Smart learning environment: Teacher's role in assessing classroom attention. Research in Learning Technology. 2019;27:1-14. doi:10.25304/rlt.v27.2072.
31. Sugant R. A framework for measuring service quality of e-learning services. Proceedings of the 3rd International Conference on Global Business, Economics, Finance and Social Sciences; 2014 Dec 19-21; Mumbai, India. 2014:19-21.
32. Ayuni D, Mulyana A. Applying service quality model as a determinant of success in E-learning: The role of institutional support and outcome value. Review of Integrative Business and Economics Research. 2019;8:145-59.
33. Martinez-Arguelles MJ, Callejo MB, Farrero JM. Dimensions of perceived service quality in higher education virtual learning environments. International Journal of Educational Technology in Higher Education. 2013;10(1):2685.