Document Type : Original Article
Authors
1 M.A. in Educational Psychology, Department of Counseling and Educational Psychology, Faculty of Education and Psychology, Ferdowsi University of Mashhad, Mashhad, Iran
2 Professor, Department of Counseling and Educational Psychology, Faculty of Education and Psychology, Ferdowsi University of Mashhad, Mashhad, Iran
3 Associate Professor, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
Abstract
Background: The goal of research in any country is scientific growth, and a key focus is empowering students to conduct high-quality research. This requires keeping pace with the latest global advancements, but in a principled manner that strengthens students' motivation as the driving force behind their research, which can only be achieved through proper training in utilizing these advancements while keeping the researcher themselves at the center.
Methods: Initially, the opinions of at least 10 artificial intelligence experts were collected using the Delphi method over three rounds, along with the opinions of 10 psychology experts over two rounds. The points obtained from the open-ended questions in the first round were transformed into questionnaires for each field, and this process continued until consensus was reached. Subsequently, based on the results from the previous stage and the theoretical foundations of social-cognitive theory, a five-session protocol was developed and confirmed by 10 experts from both fields for face and content validity.
Results: The Delphi rounds yielded 34 points on training in the use of artificial intelligence tools, including a general introduction to the concepts of chatbots and language models, the introduction of general and specialized tools, correct and principled prompt writing, demonstrating tool errors, and cross-checking outputs against scientific texts. They also covered fostering a proper culture of artificial intelligence as a research assistant rather than a research author, the researcher's acceptance of ethical and content responsibility, and the policies of reputable global journals and the necessity of adhering to them. Additionally, 44 points related to aligning the training with social-cognitive theory were identified, such as step-by-step training with visuals and personalization, the necessity of goal-setting and planning, drawing on the experiences of capable role models, and applying the taught material, along with 15 tools such as ChatGPT, Scopus AI, NotebookLM, and Scite AI. Although the protocol showed high content validity (assessed by 10 experts using CVR and CVI), the credibility of Delphi results depends on the scientific credibility of the panel members; since all participants in this research were faculty members of reputable universities, the credibility of the results can be considered high.
Conclusion: This educational program, considering the foundations of social-cognitive theory and its goals of self-regulation, self-efficacy, and research engagement of graduate students, can serve as an effective intervention for the future of research in the country, in line with global advancements and their proper utilization.
Keywords
Background
Artificial intelligence is a broad field of computer science that aims to make computers intelligent enough to think, learn, and perform human tasks, cognitive functions, and human communication, thereby helping individuals enjoy a better life with less hard work (1). Universities, and higher education in general, are among the areas that artificial intelligence has penetrated; beyond its numerous advantages in education (2, 3), it has also assisted students in research. Research, a significant factor in building and consolidating knowledge, is a process involving a series of steps for collecting and analyzing information (4). A clear example is the final research report of graduate students, the thesis, whose timely completion is economically significant and brings personal satisfaction, among other benefits (5); delays or failure to complete it bring financial and psychological problems (6). However, a significant percentage of students hold a negative attitude toward research (7) due to various challenges, such as research anxiety (8), writing problems (9), difficulty reading articles and conducting research (10), access to databases (11), and lack of time and knowledge (12, 13), which makes appropriate training necessary to create a positive attitude (13).
In this context, numerous studies have shown that AI-based tools possess features such as plagiarism detection, literature review, finding related articles (14), language correction tools, and time savings (15). However, challenges such as dependency on these tools, devaluation of traditional skills (16), originality and quality of authorship, blurring the lines between human and machine creative work, and ethical considerations (17) also exist. Using AI tools is not academic misconduct but a modern method of writing (16). However, it is crucial to note that artificial intelligence is a complementary tool and does not replace the researcher and their cognitive and creative processes (18). As mentioned, many barriers to conducting research also relate to the researcher themselves and their motivation and knowledge. Therefore, considering the importance of the individual, it is necessary to view the researcher and research, which is a type of behavior, from a psychological framework as well.
One of the most important theories in psychology is Bandura's social-cognitive theory, which considers human behavior and performance to be the result of the reciprocal interaction of behavior, person, and environment, referred to as reciprocal determinism. This theory attributes a significant portion of behavior to the individual themselves. It holds that the environment influences the person and behavior not only through experiences in social or physical settings but also through observation, referred to as observational learning. Another important construct is self-regulation, which indicates that individuals are capable of self-direction and play a crucial role in regulating and determining their behavior; to achieve their goals, they must choose proximal, self-motivating plans and guidelines (19). Zimmerman likewise held that personal factors, including self-efficacy; behavioral factors, such as self-observation, self-judgment, and self-reaction; and environmental factors, including social support and contextual structure, influence self-regulation (20). Self-efficacy is another important construct, central to human agency (19): in addition to having knowledge and skills, an individual's judgment about their ability to perform a specific task and achieve goals matters. This judgment accompanies any skill and is shaped by four sources: mastery experiences, vicarious experiences, verbal persuasion, and physiological states (19, 21). According to this theory, strengthening any factor affecting behavior, including self-regulation and self-efficacy, leads to greater engagement and more active participation in that activity (19).
Therefore, viewing research through social-cognitive theory, personal factors include motivation, critical thinking, self-efficacy, self-regulation, and even emotional states such as stress; environmental factors include deadlines, feedback, citation databases, libraries, and laboratories; and behavioral factors include searching for articles, summarizing, collecting and analyzing data, writing, and reporting findings. One can also learn by observing professors, peers, and even the articles of other researchers, rather than through trial and error. Furthermore, a study on self-regulation indicated that all stages of research require planning and reviewing potential outcomes (22), which can be understood as the researcher's motivational, behavioral, metacognitive, and environmental capability to begin work and employ strategies such as organizing, monitoring, and self-assessing in pursuit of their goals (23). Self-efficacy, another important factor in research (24), is defined as the individual's trust, belief, and confidence in their ability to successfully perform research-related tasks (25). Research engagement refers to active cognitive, behavioral, and emotional involvement in research, characterized by experiences of vigor, dedication, and absorption in one's studies (26).
Studies have shown that artificial intelligence can enhance self-regulation through idiographic methods (27), support metacognitive, cognitive, and behavioral processes (28), and utilize its strategies (29). Increased self-efficacy in programming (30) and writing, as well as positive emotional experiences, have been other impacts of artificial intelligence (31), leading to increased engagement and motivation among learners (32-34). However, these studies have mostly focused on education, with limited work in the area of research.
For instance, studies have pointed to the application of tools such as ChatGPT (35), Perplexity, Grammarly (36), and Semantic Scholar (37) in conducting research, but these studies often focus on the capabilities and limitations of these tools or their mere application. Additionally, several studies have indicated that texts generated by these tools lack depth (36), may contain fabricated references (38), and pose a risk of dependency (39); they therefore require review and editing (40) and should not replace the researcher's judgment (35), since maintaining the accuracy and objectivity of the researcher is vital (41). Moreover, studies that have employed AI-based tools in experimental research often lack a theoretical framework (31).
In summary, considering the importance of research, it is essential to align it with the latest global advancements. On the other hand, a significant portion of research is conducted by students, who must possess the necessary capabilities. However, due to a lack of sufficient knowledge and skills, ignorance of ethical principles, challenges posed by these tools, dependency on them, and the diminishing role of the researcher, as well as barriers to conducting research that relate to individual factors, it is essential to use this technology within a structured educational program emphasizing the role of the researcher. In this regard, no studies were found based on the conducted search, and most studies focused on education rather than research. Therefore, the present study is novel in its application of a theoretical framework and the use of the latest supportive tools in conducting research. The aim of this research was to develop a program for utilizing artificial intelligence in research based on Bandura's social-cognitive theory.
Objectives
The aim of this research was to develop and validate an artificial intelligence training program in research based on social-cognitive theory.
Methods
The present study was qualitative in nature, with applied-developmental objectives. Given the absence of validated instruments needed before quantitative work could begin, the researchers proceeded with exploratory intervention design. Additionally, due to the novelty of using generative artificial intelligence in research and the absence of validated intervention programs on one side, and the distinct expertise of specialists from the fields of psychology and computer science on the other, this research began with a Delphi study to gather opinions from experts in these two domains. The Delphi method aims to achieve reliable consensus among specialists and experts through a process of repeated questionnaires and controlled feedback (42). This flexible and systematic approach is used when the issue is complex, new, and has limited evidence, allowing experts to collaborate anonymously and reach consensus (43). Typically, two to three Delphi rounds are sufficient to achieve consensus (44), and the literature recommends 10 to 18 participants in each panel, with the selection of qualified experts being a core requirement (45). Among the advantages of this method are providing feedback on the collected information to each member, allowing each member to revise their judgments (46), and obtaining rich information through the repetition and review of responses (45). In this study, the classic Delphi design was employed, characterized by the anonymity of participating experts, data collection through successive rounds, feedback provision to members, and group analysis of responses (47). The steps of the study can be seen in Figure 1.
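For illustration only, the iterative logic of such a Delphi design can be sketched in a few lines of code. The `collect_ratings` function, the 5.5 threshold, and the round cap below are hypothetical placeholders, not part of the study's procedure:

```python
# Hypothetical sketch of an iterative Delphi consensus loop (not the study's software).
from statistics import median

def delphi_rounds(items, collect_ratings, threshold=5.5, max_rounds=3):
    """Re-rate surviving items each round; keep those whose median meets the threshold."""
    retained, round_no = list(items), 0
    while retained and round_no < max_rounds:
        round_no += 1
        ratings = collect_ratings(retained, round_no)  # {item: [expert scores 1-7]}
        retained = [i for i in retained if median(ratings[i]) >= threshold]
    return retained, round_no

# Toy panel: item "A" draws high scores, item "B" does not.
fake_panel = lambda items, rnd: {i: ([7, 6, 6] if i == "A" else [3, 2, 4]) for i in items}
print(delphi_rounds(["A", "B"], fake_panel))  # (['A'], 3)
```

In the actual study, the consensus rules differed somewhat between rounds and groups; the sketch only captures the repeat-until-consensus structure described above.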
Phase One (Delphi Study)
This research was conducted under ethical code IR.UM.REC.1403.277; the first phase followed the Delphi design. Initially, 18 computer science specialists and 15 psychology specialists were purposively invited. The invited specialists were from Ferdowsi University of Mashhad, Birjand University, the University of Tehran, and Shahid Beheshti University in Tehran. Among them, professors from Ferdowsi University of Mashhad and Birjand University accepted the invitation to participate in the research.
The criteria for selecting specialists were holding at least a PhD in computer science with training in the application of AI-based tools (for the computer science group) or a PhD in psychology or educational psychology with familiarity and teaching experience in social-cognitive theory (for the psychology group), being a faculty member in the same field at a public university, willingness to participate in the research, and availability. Personalized invitations, tailored to each faculty member's field and specialty and signed by the research team, were then presented in person, detailing the objective and topic, the required duration, and the number of potential rounds. For professors located in other cities, the invitations were sent via email. Among the invitees, 13 faculty members from the computer science group, comprising 4 professors, 8 associate professors, and 1 assistant professor, and 12 faculty members from psychology, comprising 1 professor, 2 associate professors, and 9 assistant professors, accepted the invitation and entered the Delphi study. Computer science specialists were selected because the main focus of the work was the use of artificial intelligence, which falls under the domain of computer science. Specialists in psychology and educational psychology were chosen over cognitive psychology or psychometrics because social-cognitive theory is a comprehensive theory within psychology whose greatest educational use and application lies in educational psychology; cognitive psychology focuses more on memory, attention, and similar areas, while the present study centers on motivation, and since the aim was not to design a questionnaire or similar instrument, psychometric specialists were unnecessary.
Initially, in the first round, a questionnaire with open-ended questions was provided to the members. This questionnaire was tailored to the specialties of the two different groups, such that the questionnaire for the computer science specialists contained 6 open-ended questions, while the psychology specialists had 1 open-ended question regarding the content of the training and how to present this content (Table 1). The questionnaires were sent to the university emails of all specialists, but responses were collected according to the convenience of each specialist, either via email, structured in-person interviews, or voice messages in messaging apps (one computer science specialist).
It is worth noting that the interviews and voice messages were transcribed in writing. The reason for having 6 questions for the computer science group, as opposed to just one for the psychology group, is that various AI-based tools are continuously advancing, and new tools are emerging. Therefore, familiarity with, utilization, and application of these tools may differ among members of the computer science group. Additionally, given the unique knowledge of specialists in this field, who are more informed about the foundational basis of these tools than any other group, more detailed questions regarding the application of these tools in various aspects of research were deemed necessary.
However, for psychology specialists, one question seemed sufficient since it pertained to social-cognitive theory, with which they were fully familiar, and this was evident in the lack of need for a third round of Delphi for this group.
Table 1. Questions from the first round for specialists in computer science and psychology

Computer
1. In your opinion, in what areas (such as topic selection, abstract, introduction, background, quantitative/qualitative research methods, findings, discussion, conclusion, plagiarism, etc.) can artificial intelligence assist graduate students in a research project, including writing articles or theses?
2. What content and materials do you believe should be taught in various sections of research work for training in the use of artificial intelligence?
3. What tools, software, or AI chatbots do you recommend for teaching this content?
4. For each of the tools mentioned in your previous answer, for which components
5. In your opinion, how or through what means can the necessary content be taught?
6. What important aspects should be considered for an artificial intelligence training program?

Psychology
1. In your opinion, how should an educational program using AI tools be designed to enhance
Subsequently, in the second round, the responses obtained from the specialists were summarized, compiled with further explanations, and returned to the specialists as a questionnaire with a 7-point Likert scale (ranging from strongly agree to strongly disagree) so they could indicate their level of agreement with the items formulated from the analysis of the first-round responses. At this stage, 10 individuals from each field completed and returned the questionnaires. Follow-up showed that the attrition of 3 computer science and 2 psychology participants was primarily due to a lack of sufficient time and those specialists' emphasis on the knowledge and skills of the supervising and advising professors. The responses collected in the second round were analyzed, and a median in the upper part of the scale was taken as consensus among the specialists. Items very close to the criterion, or additional items suggested by specialists at this stage, were resent to the specialists with explanations and reasons. This process continued until consensus was reached: after 3 rounds for the computer science specialists and 2 rounds for the psychology specialists.
Phase Two (Protocol Development and Validation)
Based on the results of the previous phase and the key themes derived from the theoretical foundations (19), a protocol of 5 sessions of 90 minutes each (extendable to 120 minutes if necessary) was developed and presented for validity assessment to 14 purposively selected specialists from the psychology and computer science groups. The criteria for selecting specialists from the psychology group were holding at least a PhD in psychology or educational psychology and at least 3 years of teaching experience in social-cognitive theory; for the computer science group, holding at least a PhD in computer science with a focus on artificial intelligence and a minimum of 6 years of teaching experience in theoretical and practical aspects of artificial intelligence. After the specialists were selected, a validity form was sent that included definitions, objectives, the method of developing the intervention, implementation procedures, assignments, a summary of session content, and the duration of each session. This enabled them to assess face validity and provide comments and suggestions, and to evaluate the content validity of the protocol using the CVR index (necessity of the content) and the CVI index (relevance, simplicity, and clarity of the content). Following a one-month follow-up, 10 specialists completed and returned the form, of whom 8 (80%) were male and 2 (20%) were female; 4 (40%) were associate professors and 6 (60%) were assistant professors, from five different universities.
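For readers unfamiliar with the two indices, the standard Lawshe CVR and item-level CVI can be computed as follows. This is a generic illustration with made-up ratings, not the study's data; for a 10-member panel, a commonly cited minimum acceptable CVR is 0.62:

```python
# Generic CVR/CVI computation for an expert panel (illustrative values, not study data).

def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def i_cvi(ratings, relevant_min: int = 3) -> float:
    """Item-level CVI: proportion of experts rating the item 3 or 4 on a 4-point scale."""
    return sum(r >= relevant_min for r in ratings) / len(ratings)

# 9 of 10 experts mark an item "essential"; 9 of 10 rate it relevant (3 or 4).
print(cvr(9, 10))                             # 0.8
print(i_cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 2]))  # 0.9
```

CVR ranges from -1 (no one calls the item essential) to +1 (everyone does), while the item-level CVI is simply the proportion of experts judging the item relevant.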
Results
The results of the present study in the first round of Delphi were summarized based on the responses of faculty members to the open-ended questions, which were categorized into two different groups. The demographic information of the Delphi members is presented in Table 2.
Table 2. Demographic information of Delphi project experts
Computer
- First round, 13 people: Professor (4 people, male), Associate Professor (8 people, male), Assistant Professor (1 person, male)
- Second round, 10 people: Professor (2 people, male), Associate Professor (7 people, male), Assistant Professor (1 person, male)
- Third round, 10 people: Professor (2 people, male), Associate Professor (7 people, male), Assistant Professor (1 person, male)

Psychology
- First round, 12 people: Professor (1 person, male), Associate Professor (2 people, male), Assistant Professor (9 people; 6 male, 3 female)
- Second round, 10 people: Associate Professor (1 person, male), Assistant Professor (9 people; 6 male, 3 female)
Based on the responses obtained from the 13 specialists in the computer science group, a questionnaire was designed containing 28 items on educational content, 14 items on application and permissible use, 5 items on limitations and principles of use, and 32 AI-based tools, including chatbots and similar resources. This questionnaire was provided to the members in the second round so they could indicate their level of agreement on a 7-point Likert scale. A 7-point scale was chosen to widen the range of options and improve the accuracy of the specialists' responses, as this scale is often preferred over 9-point or 5-point scales in research (47, 48).

It should be noted that Kendall's coefficient of concordance applies to ranked data; since the data in this research were collected on a Likert scale, an ordered categorical level of measurement, there was no need to calculate it. Moreover, most studies have not calculated this coefficient, and the commonly used evaluation criterion has been the median (49). The interquartile range was not used because, by the formula for this index, an item whose scores cluster at 7 and 6 yields the same quartile difference as an item whose scores cluster at 3 and 2: the first reflects high agreement among specialists on the item's usefulness, the second high agreement on its lack of usefulness, so the index cannot distinguish the two. Furthermore, studies have noted that there is still no agreed-upon guideline in this regard (50).
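The objection to the interquartile range can be shown numerically. In this hypothetical example (not study data), a panel clustered at 6-7 and a panel clustered at 2-3 produce identical IQRs even though their medians, and hence their verdicts, are opposite:

```python
# Hypothetical panels: same IQR, opposite verdicts (illustration of the point above).
import statistics

agree_useful  = [7, 7, 6, 7, 6, 6, 7, 6, 7, 6]   # strong agreement: useful
agree_useless = [3, 3, 2, 3, 2, 2, 3, 2, 3, 2]   # strong agreement: not useful

def iqr(scores):
    q1, _, q3 = statistics.quantiles(scores, n=4)  # quartiles of the score list
    return q3 - q1

print(iqr(agree_useful), iqr(agree_useless))                              # 1.0 1.0
print(statistics.median(agree_useful), statistics.median(agree_useless))  # 6.5 2.5
```

The IQR is identical (1.0) in both cases, while the medians (6.5 versus 2.5) separate the two verdicts, which is why the median was preferred as the consensus criterion.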
Among the specialists, 10 completed and returned the questionnaire. The data were analyzed in SPSS-26; the median score of each item was calculated, and a median of 5.5 or higher was set as the criterion for retaining an item. On this criterion, a total of 13 general items and 5 tools were eliminated for lack of consensus. Among the tools, only 11 were definitively endorsed by most specialists, while 16 others fell slightly below the criterion score, and 3 new tools were suggested by specialists in the comments section of the second questionnaire. A third round was therefore conducted to reach consensus on these 19 tools, with the members' reasons for their selections provided; in this round, 4 tools reached the required threshold and were included in the main plan. The remaining points summarized from the computer science specialists' opinions included: creating a proper culture of artificial intelligence as a supportive tool for researchers rather than a replacement; the necessity of review and validation; general training on artificial intelligence concepts and on chatbots and language models; showcasing examples of errors made by these tools; professional prompt writing; full acceptance of ethical and content responsibility for what is derived from these tools; critiquing AI responses; introducing general and specialized tools; setting limits on their use; emphasizing their role as guides in working with data analysis software rather than conducting the analysis themselves; the necessity of a knowledgeable and experienced supervising professor alongside the student for topic selection; the usefulness of these tools for translation and correcting the linguistic structure of texts; and the need for clarification on how and to what extent they should be used, along with the policies of reputable global journals.
A similar process was carried out for the psychology specialists. Based on the opinions of the 12 psychology specialists in response to the first-round questions, a total of 49 items were extracted and presented in questionnaire form in the second round. Using the median criterion of 5.5 or higher, and from the questionnaires returned by 10 specialists in this field, 5 items were eliminated for lack of consensus, while the remaining, higher-scoring items were accepted, removing the need for a third Delphi round in this group. The remaining points summarized from the psychology specialists' opinions included: step-by-step, flexible, and personalized training; planning and goal-setting for research work; presenting short documentaries and video clips; leveraging the experiences of capable role models in the field; practical exercises on what has been taught; assessing student needs; providing examples and addressing issues; motivational statements and verbal encouragement from the instructor; the instructor's competence in this field; comparing samples written by the researcher and by AI and identifying similarities and differences; and discussing the applications of these tools at each stage of the research.
The tools that reached consensus in the final stage were ChatGPT, DeepSeek, Gemini, Copilot, Semantic Scholar, Scopus AI, Connected Papers, Perplexity, ResearchRabbit, SciSpace/Typeset, NotebookLM, Grammarly, iThenticate, GPTZero, and Scite AI. The discussion included specialists' opinions on which stages of the research process each tool would benefit most, as well as the distinction that general tools suit broader tasks while specialized tools offer greater capabilities for specific sections.
It is important to note that a score of 5.5 was selected as the criterion because, although the midpoint of a 7-point scale is 4, most of the responses obtained were above 5. Additionally, in the computer science group, 16 tools received a median score of 5, and training on all 16 would have been impractical, so effort was made to select the tools with the greatest consensus among the specialists. Tools with a median of 5 were not discarded, however; they were presented to the specialists in the third round so the best among them could be selected for inclusion in the research. Moreover, studies have shown that an appropriate percentage of agreement for Delphi studies falls within the range of 70% to 80% (50, 51), with around 75% being ideal (52); on a 7-point scale this corresponds to a value greater than 5 and less than 6, making the median criterion chosen here appropriate. For the psychology group, the greater consensus among the professors made a third Delphi round unnecessary. The questionnaires from the Delphi rounds can be found in Appendices A.1, A.2, and B.1, and the process of retaining, eliminating, and adding points from the Delphi study is illustrated in Figure 2.
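The cut-off logic can be restated as a small computation. The item ratings below are invented for illustration (the study itself used SPSS-26), but the 5.5 rule and the 0.75 × 7 = 5.25 agreement benchmark follow the text:

```python
# Illustrative median cut-off on a 7-point Likert scale (made-up ratings, not study data).
from statistics import median

CUTOFF = 5.5                      # just above the ~75% agreement level: 0.75 * 7 = 5.25
ratings = {
    "tool_A": [7, 6, 7, 6, 6, 7, 5, 6, 7, 6],   # median 6 -> retained
    "tool_B": [5, 5, 6, 4, 5, 5, 6, 5, 4, 5],   # median 5 -> sent to a further round
}
kept = [item for item, scores in ratings.items() if median(scores) >= CUTOFF]
print(kept)  # ['tool_A']
```

As in the study, items at a median of exactly 5 are not discarded outright but become candidates for re-evaluation in a subsequent round.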
In the following, based on the results and insights obtained from the specialists in the Delphi study, as well as a review of the foundations of social-cognitive theory, a protocol consisting of 5 sessions of 90 minutes each was developed, as shown in Table 3.
As mentioned, the computer science faculty were asked about the content (what), while the psychology faculty were asked about the teaching methods (how). The integration of results from the two groups was therefore structured so that the content and tools suggested by the computer science specialists were placed within the sessions, while the presentation methods were tailored, based on the psychology specialists' feedback, to empower the researcher. There were no significant disagreements between the specialists except regarding the tools, which is why consensus required three rounds for the computer science specialists and two for the psychology specialists. The integration of results was conducted by the research team, composed of an associate professor of educational psychology, an associate professor of computer engineering, and a master's student in educational psychology who is a member of the National Elite Foundation and holds a certificate for over 30 hours of training on the use of AI tools in research offered by the country's most reputable universities. No specific contradictions were observed when integrating the opinions of the computer science and psychology specialists; similar points were evident in both, such as simple and step-by-step training, practical exercises, personalized programs, required research skills, showcasing AI errors, and usage cases and limits.
Table 3. Intervention developed based on the results of the Delphi study and theoretical foundations
Session 1 (Objective): Introduction to Artificial Intelligence in Research and How to Enter It
Content: Introduction; definition of artificial intelligence and its evolution; brief explanation of key AI concepts; the requirements and necessity of using AI in line with global scientific growth; advantages of AI in reducing wasted time and aiding scientific development; applications of AI; a short video or documentary on the importance of research; sharing experiences from professors and students; assessing participants' knowledge of working with AI, specifically chatbots; discussing necessary research skills; the importance of self-identifying strengths and weaknesses; emphasizing the need for sufficient knowledge in one's academic field; how to access and use AI, particularly chatbots; principles of effective prompt writing; and the role of AI tools in knowledge management through organization and summarization.
Assignments: Accessing and using AI tools, especially chatbots; writing a professional prompt.
Implementation method: Providing examples, practical application of all discussed items, verbal encouragement, evoking positive emotions, addressing questions, predicting outcomes, feedback from the instructor and the chatbot, setting deadlines until the next session, presenting new and reliable information, and introducing session tools with their capabilities and limitations. (Explanatory method, group discussion, Q&A, video screening, and practical exercises) (ChatGPT, DeepSeek, Gemini, Copilot) Key components: observational learning; mastery experience, vicarious experience, and verbal persuasion; forethought and self-observation; verbal instruction and feedback.

Session 2 (Objective): Training on Designing a Personalized Program for Conducting Research, Using AI Tools and Chatbots to Identify Research Gaps and Select Topics and Titles
Content: Review of the previous session; analysis of assignments (quality, quantity, etc.); creating a planning table and setting SMART goals; prioritizing short-term and long-term goals; the importance of topic and title selection in research and the role of chatbots in this process; familiarization with leading experts in the relevant scientific field and networking; identifying research gaps with chatbots and determining content and time allocation for each topic; clearly defining standards.
Assignments: Each participant creates a planning table for their research project using AI and chatbots, explores topics within their field, identifies at least three new topics through AI, and identifies prominent experts in that area.
Implementation method: Providing examples, practical application of all discussed items, verbal encouragement, evoking positive emotions and anxiety-reducing strategies, addressing questions, predicting outcomes, feedback from the instructor and the chatbot, setting deadlines until the next session, and presenting new and reliable information. (Explanatory method, group discussion, Q&A, and practical exercises) (ChatGPT, DeepSeek, Gemini, Copilot, Semantic Scholar) Key components: observational learning, mastery experience, and verbal persuasion; forethought, self-observation, goal-setting, and performance; verbal instruction and feedback.

Session 3 (Objective): Training on Smart Searching Skills, Finding Articles, and Writing Research with AI Tools (Identifying Strategies, Solutions, and Recommended AI Software)
Content: Review of the previous session; the necessity and components of the introduction and background; how to gather relevant articles; examining their comprehensiveness and validity; verification, paraphrasing and rewriting, translating, organizing, and summarizing them using AI tools.
Assignments: Finding articles related to the topics selected in previous sessions and translating, summarizing, rewriting, and organizing them through AI.
Implementation method: Providing examples, practical application of all discussed items, verbal encouragement, evoking positive emotions, addressing questions, predicting outcomes, feedback from the instructor and the AI, verification and referencing, showcasing examples of AI errors, setting deadlines until the last session, presenting new and reliable information, and introducing session tools with their capabilities and limitations. (Explanatory method, group discussion, Q&A, and practical exercises) (Scopus AI, Connected Papers, Perplexity, ResearchRabbit, Semantic Scholar, SciSpace/Typeset) Key components: observational learning, mastery experience, and verbal persuasion; performance and self-judgment; verbal instruction and feedback.

Session 4 (Objective): Training on Using AI Tools in Methodology, Designing Instruments, and
Content: Review of the previous session; using AI tools to monitor the program designed in previous sessions and their application in methodology and discussion/conclusion; monitoring progress based on individual plans; using AI in quantitative and qualitative methodologies; creating charts.
Assignments: Interacting with AI tools to select a quantitative or qualitative method relevant to the topic and research work; understanding how to work with software for each method; practicing on a small dataset; using feedback tools; designing a 10-item questionnaire.
Implementation method: Providing examples, practical application of all discussed items, verbal encouragement, evoking positive emotions, addressing questions, predicting outcomes, feedback from the instructor and the AI, verification and referencing, showcasing examples of AI errors, providing new and reliable information, introducing session tools with their capabilities and limitations, and introducing tracking tools and setting alerts to monitor completed activities. (Explanatory method, group discussion, Q&A) Key components: observational learning, mastery experience, verbal persuasion, performance and

Session 5 (Objective): Training on Using AI Tools in Source Review, Work Evaluation, Plagiarism Detection,
Content: Review of the previous session; correcting linguistic structure; ethical principles required with the introduction of AI; detecting plagiarism and AI-generated texts; reviewing and analyzing the policies of reputable journals worldwide; usage cases, how to use, and limits of use; providing a sample abstract and problem statement by participants and by AI tools; examples of unintentional plagiarism; mental imagery; emphasizing the culture of proper use of AI tools; stressing the researcher's complete acceptance of responsibility for the use of these tools.
Assignments: Writing a short text both by participants and by AI tools and reviewing its validity; creating a short presentation file.
Implementation method: Providing examples, practical application of all discussed items, verbal encouragement, evoking positive emotions, addressing questions, predicting outcomes, feedback from the instructor and the AI, providing new and reliable information, and introducing session tools with their capabilities and limitations. (Explanatory method, group discussion, Q&A, and practical exercises) (NotebookLM, Grammarly, iThenticate, SciSpace/Typeset, GPTZero, Scite AI) Key components: observational learning, mastery experience, and verbal persuasion; performance, self-reflection, and
This protocol, along with the content validity form, was sent to 14 specialists from both the computer science and psychology groups to evaluate and provide their opinions on its face and content validity. The demographic information of the specialists at this stage is presented in Table 4.
In total, 10 specialists returned the forms, and the results of the CVR and CVI calculations for each session are shown in Table 5. As observed, the content validity ratio (CVR) exceeded Lawshe's criterion (53), which is 0.62 for 10 specialists, in all sessions except the fourth. It is worth noting that the two specialists who rated this session as "useful" rather than "essential" were from the AI group; they stated that research methods vary by discipline and that, in computer science, which relies on simulation, the quantitative and qualitative methods used in the humanities do not apply.
Additionally, one specialist commented that methodology should be derived from articles and that AI should merely serve as an intermediary rather than a provider of methods; data should therefore be handled by human intelligence, by the researcher, according to the specific methodology that the research requires. Given these explanations and the fact that the value falls only 0.02 short of the criterion, the result can be considered acceptable. Moreover, according to Waltz and Bausell (54), the acceptable value for the CVI index is above 0.8, and as shown in Table 5, all values exceeded this threshold, indicating that the content across all sessions was relevant, simple, and clear.
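For readers who wish to verify the indices reported in Table 5, both values follow directly from the expert counts. The minimal sketch below (plain Python; the helper names `cvr` and `cvi` are ours for illustration, not from any library) reproduces the reported values for a 10-expert panel.

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2),
    where n_e is the number of experts rating an item 'essential'
    and N is the panel size."""
    half = n_experts / 2
    return (n_essential - half) / half

def cvi(n_agree: int, n_experts: int) -> float:
    """Item-level content validity index: the proportion of experts
    rating an item favorably (e.g. relevant, simple, or clear)."""
    return n_agree / n_experts

# Panel of 10 experts, as in the validation phase of this study:
print(cvr(10, 10))  # Session 1: 10 essential -> 1.0
print(cvr(9, 10))   # Sessions 2, 3, 5: 9 essential -> 0.8
print(cvr(8, 10))   # Session 4: 8 essential -> 0.6, just below the 0.62 criterion
print(cvi(9, 10))   # e.g. 9 of 10 rate an item clear -> 0.9
```

This also makes the discussion of Session 4 concrete: with one more "essential" rating, its CVR would have reached 0.8, comfortably above Lawshe's 0.62 cutoff for a 10-member panel.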
Table 4. Demographic information of specialists in the validation phase
| Field name | Number of participants | Role | Gender |
|---|---|---|---|
| Computer | 4 | Associate Professor | Male |
| Psychology | 6 | Assistant Professor | Male (4), Female (2) |
Table 5. Results of the content validity ratio (CVR) and content validity index (CVI) study
| Session number | CVR (necessity) | CVI: relevance | CVI: simplicity | CVI: clarity |
|---|---|---|---|---|
| First | 1 (10 essential) | 1 | 1 | 1 |
| Second | 0.8 (9 essential, 1 useful) | 1 | 1 | 0.9 |
| Third | 0.8 (9 essential, 1 useful) | 1 | 1 | 1 |
| Fourth | 0.6 (8 essential, 2 useful) | 0.9 | 1 | 1 |
| Fifth | 0.8 (9 essential, 1 useful) | 1 | 1 | 1 |
Discussion
Within the framework of social-cognitive theory, and emphasizing the key concepts of this theory (19), an educational program in the field of research was developed. In this process, insights from computer science specialists were used regarding the best current artificial intelligence tools for assisting research and their proper and principled use. Feedback from psychology specialists was incorporated to design the program in accordance with the concepts of social-cognitive theory, merging the viewpoints of both groups of specialists with the scientific foundations. Thus, the educational program aimed to encompass the core concepts of social-cognitive theory, such as reciprocal determinism, observational learning, sources of self-efficacy, and the stages of self-regulation.
These concepts can be effectively integrated into a workshop environment that provides access to systems for practical training, articulates the importance and advantages of conducting research in general, and discusses the benefits and challenges of using AI tools in research tasks. The program emphasizes building self-efficacy through participants' hands-on experience and practical exercises, addressing questions, feedback from the instructor, alignment with what the instructor presents or demonstrates, and observation of examples the instructor performs. It also highlights that self-belief alone is insufficient and that a well-structured plan and goal-setting are necessary for timely research completion using these tools. The program includes strategies for achieving these goals, such as translation, rewriting, finding relevant articles, and creating concept maps of article content with AI tools; monitoring completed tasks using these tools; and allowing participants to evaluate themselves after the program through descriptive responses, short answers, or oral assessments. The program was designed in five 90-minute sessions and conducted in a group format, including in-session assignments and tool use as well as tasks outside the sessions. Based on the literature review, no study was found that designed and implemented an educational program in this manner specifically in the field of research, precluding a direct comparison of the current program with similar studies.
However, studies abroad have explored training in AI tool usage within the self-regulation framework of social-cognitive theory, albeit in areas such as academic progress and education rather than research, which aligns with the present program. For instance, Huang and colleagues (55) conducted an experimental study with 75 incoming programming students, comparing an experimental group aided by an AI tool with a control group aided by a regular teaching assistant. They designed their training program around the need for incoming students to develop self-regulation according to Zimmerman's framework (56), which is rooted in social-cognitive theory. In the Forethought stage, they emphasized goal-setting and planning; in the Performance stage, they taught strategies such as note-taking, highlighting, and diagramming with the AI tool, along with personalized responses from the tool, worked examples, and practice; and in the Self-reflection stage, students summarized concepts and completed assignments. Their results indicated that the group using the AI tool not only achieved higher scores in the coding course but also experienced reduced cognitive load and improved self-regulation, particularly in intrinsic goal orientation, task value, and metacognitive self-regulation, leading to deeper engagement in their programming activities.
Another study highlighted the necessity of integrating new technologies in blended classrooms and examined the impact of these tools on self-regulated learning and higher-order thinking skills among students. This study utilized Chat GPT for its extensive information resources and immediate feedback, but due to challenges such as learner dependency on these tools, the educational program was designed and implemented based on self-regulation stages to enhance students' problem-solving skills. In the first week, they focused on goal-setting and learning planning according to the first stage of Zimmerman’s self-regulation framework. In weeks two through nine, students were encouraged to use these tools when faced with problems and challenges and to engage in group discussions. Week ten was dedicated to self-reflection, which involved recalling concepts learned throughout the course. To assess their results, they designed a GCLA tool that guides learners to derive answers themselves rather than providing direct responses like Chat GPT. Their findings showed that both tools enhanced self-regulation and higher-order thinking skills, but the newly designed tool significantly increased self-regulation, particularly in terms of self-efficacy, intrinsic motivation, cognitive and behavioral engagement, while reducing disengagement. It also promoted higher-order thinking, especially critical thinking and problem-solving (57).
Research has also been conducted within the country, although these studies have not specifically focused on artificial intelligence or on research itself; they are mentioned here because the educational programs they used were based on social-cognitive theory. For example, Badakhshian and colleagues (58) demonstrated that their rehabilitation program, grounded in social-cognitive theory, enhanced the self-efficacy of individuals with spinal cord injuries. In their educational program, after highlighting the importance of the issue, they employed modeling through familiarization and interviews with a successful individual with the same injury, positive feedback, motivational video screenings, goal-setting, identifying challenges and obstacles, and teaching necessary skills. Additionally, a study involving 60 students examined the impact of counseling based on this theory, focusing on perceptions of academic self-efficacy. The intervention addressed past experiences and their consequences, positive role models for vicarious reinforcement, and similar topics. The results indicated an increase in self-efficacy within the experimental group (59).
Hashemian and colleagues (60) also investigated the impact of an intervention based on this theory on increasing physical activity among 246 learners. They applied the constructs of this theory in their intervention sessions. For instance, in the knowledge discussion, they defined and explained the importance of physical activity; in the expected outcomes and its value discussion, they elaborated on the effects of physical activity. They utilized planning and goal-setting for self-regulation and modeling for self-efficacy. They also incorporated observational learning, another prominent component of this theory, through video clips and participant involvement in the program. Their research results indicated that, in addition to improving physical activity, the self-efficacy and self-regulation of the members of the experimental group also experienced significant increases.
Regarding the validity of the educational program for using AI tools in research based on social-cognitive theory, the feedback from specialists in both computer science and psychology, expressed through the content validity index and the content validity ratio, indicated that all sessions possessed desirable content validity. In other words, the computer and psychology specialists deemed the training package necessary, relevant, simple, and clear. This aligns with the psychology specialists' views on the consistency of session content with the framework of social-cognitive theory and its concepts (19) in combination with research, as well as the computer specialists' perspective on the appropriate use of the best tools available in this time frame. However, the results for the fourth session were somewhat different. It is noteworthy that the two specialists who rated this session as "useful" were from the AI field and stated that research methods vary by discipline, and that in computer science, which is simulation-based, the quantitative and qualitative methods used in the humanities do not apply. Furthermore, one specialist noted in their comments on this session that "methodology should be extracted from articles, and AI should merely serve as an intermediary, not a provider of methods. Therefore, data must be handled by human intelligence and by the researcher according to the specific methodology required for that research." Overall, given these explanations and the fact that the value fell only 0.02 short of the criterion, the result can be considered acceptable. As mentioned, there were no similar studies against which to compare the current program; however, the program is consistent with Bandura's social-cognitive theory and its concepts (19), and it aligns with other somewhat similar programs (55, 57). This program aimed to innovatively combine the important concepts of social-cognitive theory in the realm of research and to appropriately leverage the most significant technology of the day, artificial intelligence. The integration of insights from computer science specialists, who have greater knowledge of this emerging technology and its underlying structures, capabilities, and limitations, enhances the value of the content. However, despite its value, this content needs to be presented and taught correctly, maintaining the prominent role of the researcher and enhancing their motivation. This underscores the importance of the perspectives of psychology and educational psychology specialists, who are more attuned to motivation and education, and thereby highlights the need for a practical, structured, and interdisciplinary program.
It is important to note that this research faced several limitations, including the lengthy and time-consuming nature of the Delphi process, which led to decreased participation from faculty members as the number of rounds increased and fatigue set in. Additionally, the AI tools covered in the intervention may change over time and be replaced by more advanced ones, though such progress typically extends across all comparable tools to some extent.
The gender composition of the Delphi participants, which was predominantly male, also posed a limitation. Furthermore, because the Delphi process relies on consensus among specialists, significant opinions that did not align with the majority may have been overlooked. The lack of familiarity of some members with certain tools proposed by other specialists, and their consequent selection of the "Neither agree nor disagree" option, can also be considered a limitation. Another limitation was the reduction in the number of specialists across the Delphi rounds, which somewhat decreased the stability of responses; however, since the minimum required number was maintained, with 10 specialists remaining from each field, this was deemed acceptable. Although the overall process of conducting research is broadly similar across disciplines, the program should be adapted to the specific requirements of each field; its use, especially the fourth session, should therefore be approached with caution in engineering disciplines. Additionally, since the final round included only 10 specialists per field, it is recommended that this research be repeated with a larger panel. Moreover, this study only developed the intervention, and its effectiveness on students' self-regulation, self-efficacy, and research engagement still needs to be evaluated. Implementing the developed program with doctoral students and comparing the results with those of master's students is also suggested, as is developing an educational program based on other significant psychological theories, such as self-determination theory, and comparing it with the social-cognitive program of this study. Finally, the content should be updated to the latest versions of the tools for any future implementation of the program.
Higher education institutions should provide educational units for using AI-based tools in research for graduate students or arrange practical workshops for them. Establishing regulations for ethical compliance and precise disclosure of the use of these tools is also essential. Conducting structural equation modeling studies, particularly case studies or systematic reviews regarding the use of AI-based tools in research and studies conducted in this area, is another subsequent recommendation.
Conclusion
This research introduced a validated educational program for using artificial intelligence, based on social-cognitive theory, aimed at enhancing self-regulation, self-efficacy, and research engagement among graduate students. The program sought to demonstrate to graduate students a framework for the correct and principled use of contemporary technologies, while also strengthening the dimensions of Bandura's reciprocal determinism, emphasizing the role of the individual, and, more generally, empowering graduate students' research capabilities. Furthermore, given the increasing proliferation of AI tools with diverse capabilities, another important advantage of this study was the identification of a limited set of necessary and practical tools, recommended by leading experts in the field, that assist researchers while preventing overload and confusion amid the vast array of available options. Implementing and comparing the current program across different levels of graduate education, or developing a program based on another psychological theory and comparing it with this one, will further contribute to the knowledge in this field.
References
- Sarker IH. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput Sci. 2022; 3(2): 158. doi: 10.1007/s42979-022-01043-x. PMID: 35194580 PMCID: PMC8830986
- Keykha A, Mohammadi H, Darabi F, Hosseini SS. Identifying the Applications of Artificial Intelligence in the Assessment of Medical Students. Strides in Development of Medical Education. 2025; 22(1): 1-18. doi: 10.22062/sdme.2025.200833.1512.
- Ortega-Ochoa E, Pérez JQ, Arguedas M, Daradoumis T, Puig JMM. The effectiveness of empathic chatbot feedback for developing computer competencies, motivation, self-regulation, and metacognitive reasoning in online higher education. Internet of Things. 2024; 25: 101101. doi: 10.1016/j.iot.2024.101101.
- Creswell JW, Guetterman TC. Educational research: Planning, conducting, and evaluating quantitative and qualitative research. New Jersey: Pearson; 2019.
- Cone JD, Foster SL. Dissertations and theses from start to finish: Psychology and related fields. 2nd ed. Washington, DC: American Psychological Association; 2006.
- Dupont S, Meert G, Galand B, Nils F. Postponement in the completion of the final dissertation: An underexplored dimension of achievement in higher education. European Journal of Psychology of Education. 2013; 28(3): 619-39. doi: 10.1007/s10212-012-0132-7.
- Akour MM, Damra JK, Al Ali TM, Ghaith SM, Ghbari TA, Shammout NA. Validation of the revised scale of students' attitudes towards research. Studies in Higher Education. 2023; 49(1): 33-46. doi: 10.1080/03075079.2023.2220700.
- Heidari-Soureshjani S, Yarmohammadi-Samani P, Mohammadian-Hafshejani A, Gholipour Mofrad-Dashtaki D. Investigating the Relationship between Research Anxiety and Academic Self-Concept in Master's and Doctoral Students. Strides in Development of Medical Education. 2022; 19(1): 175-8. doi: 10.22062/sdme.2022.196957.1092.
- Casanave CP, Hubbard P. The writing assignments and writing problems of doctoral students: Faculty perceptions, pedagogical issues, and needed research. English for Specific Purposes. 1992; 11(1): 33-49. doi: 10.1016/0889-4906(92)90005-U.
- Pruskil S, Burgwinkel P, Georg W, Keil T, Kiessling C. Medical students' attitudes towards science and involvement in research activities: a comparative study with students from a reformed and a traditional curriculum. Med Teach. 2009; 31(6): e254-9. doi: 10.1080/01421590802637925. PMID: 19811157
- Salehi M, Saeedi P, Jabbari N, Kazemi-Malek Mahmoudi M, Kazemi-Malek Mahmoudi Sh. Obstacles to conducting research activities at the university from the perspective of students at Golestan University of Medical Sciences. Educational Development of Judishapur. 2016; 7(1): 84-93. [In Persian]
- Sobczuk P, Dziedziak J, Bierezowicz N, Kiziak M, Znajdek Z, Puchalska L, et al. Are medical students interested in research? - students' attitudes towards research. Ann Med. 2022; 54(1): 1538-47. doi: 10.1080/07853890.2022.2076900. PMID: 35616902 PMCID: PMC9891220
- Mahmoodi F, Beheshti H. Survey of the Medical Sciences Students' Attitude Towards Research. Strides Dev Med Educ. 2024; 21(1): 218-25. doi: 10.22062/sdme.2024.199683.1368.
- Behrooz H, Lipizzi C, Korfiatis G, Ilbeigi M, Powell M, Nouri M. Towards Automating the Identification of Sustainable Projects Seeking Financial Support: An AI-Powered Approach. Sustainability. 2023; 15(12): 9701. doi: 10.3390/su15129701.
- Sáiz-Manzanares MC, Marticorena-Sánchez R, Martín-Antón LJ, González Díez I, Almeida L. Perceived satisfaction of university students with the use of chatbots as a tool for self-regulated learning. Heliyon. 2023; 9(1): e12843. doi: 10.1016/j.heliyon.2023.e12843. PMID: 36704275 PMCID: PMC9871218
- Fiedler A, Döpke J. Do humans identify AI-generated text better than machines? Evidence based on excerpts from German theses. International Review of Economics Education. 2025; 49: 100321. doi: 10.1016/j.iree.2025.100321.
- Melliti M. AI in MA thesis writing: The use of lexical patterns to study the ChatGPT influence. TESOL International Journal. 2024; 6(3): 58-76. doi: 10.58304/ijts.20240305.
- Dangprasert S, Kamtab P. Development of an AI-Powered System for Thesis Advisor Consultations. International Journal of Information and Education Technology. 2025; 15(6): 1134-43. doi: 10.18178/ijiet.2025.15.6.2316.
- Bandura A. Social foundations of thought and action: A social cognitive theory. In: Marks D, editor. The Health Psychology Reader. New Jersey: Prentice-Hall; 1986: 617.
- Zimmerman BJ. A social cognitive view of self-regulated academic learning. Journal of Educational Psychology. 1989; 81(3): 329. doi: 10.1037/0022-0663.81.3.329.
- Bandura A. Self-efficacy: The exercise of control. 1st ed. New York: W. H. Freeman and Company; 1997: 604.
- White B, Frederiksen J, Collins A. The interplay of scientific inquiry and metacognition: More than a marriage of convenience. In: Hacker DJ, Dunlosky J, Graesser AC, editors. Handbook of metacognition in education. 1st ed. England: Routledge; 2009: 175-205.
- Koohi M, Kareshki H, Mahram B. Investigating the role of Educational Groups and Academic Degrees at Research Self-Regulation of postgraduate Students. Educational Research. 2019; 6(38): 86-106. doi: 10.52547/erj.6.38.86. [In Persian]
- Salehi M, Kareshki H, Ahanchian M. Testing the causal model of the role of social cognitive factors affecting research self-efficacy of doctoral students. Iranian Higher Education. 2014; 5(3): 59-83. [In Persian]
- Forester M, Kahn JH, Hesson-McInnis MS. Factor Structures of Three Measures of Research Self-Efficacy. Journal of Career Assessment. 2004; 12(1): 3–16. doi: 1177/1069072703257719.
- Vekkaila J. Doctoral student engagement - The dynamic interplay between students and scholarly communities. (dissertation). Helsinki: University of Helsinki; 2014: 110.
- Saqr M, Cheng R, López-Pernas S, Beck ED. Idiographic artificial intelligence to explain students' self-regulation: Toward precision education. Learning and Individual Differences. 2024; 114: 102499. doi:1016/j.lindif.2024.102499.
- Jin SH, Im K, Yoo M, Roll I, Seo K. Supporting students’ self-regulated learning in online learning using artificial intelligence applications. International Journal of Educational Technology in Higher Education. 2023; 20(1): 37. doi: 1186/s41239-023-00406-5.
- Wei L. Artificial intelligence in language instruction: impact on English learning achievement, L2 motivation, and self-regulated learning. Front Psychol. 2023; 14: 1261955. doi: 3389/fpsyg.2023.1261955. PMID: 38023040 PMCID: PMC10658009
- Yilmaz R, Yilmaz FGK. The effect of generative artificial intelligence (AI)-based tool use on students' computational thinking skills, programming self-efficacy and motivation. Computers and Education: Artificial Intelligence. 2023; 4: 100147. doi: 1016/j.caeai.2023.100147.
- Nazari N, Shabbir MS, Setiawan R. Application of Artificial Intelligence powered digital writing assistant in higher education: randomized controlled trial. Heliyon. 2021; 7(5):e07014. doi: 1016/j.heliyon.2021. PMID: 34027198 PMCID: PMC8131255
- Naseer F, Khan MN, Tahir M, Addas A, Aejaz SH. Integrating deep learning techniques for personalized learning pathways in higher education. Heliyon. 2024; 10(11):e32628. doi: 1016/j.heliyon.2024.e32628. PMID: 38961899 PMCID: PMC11219980
- Ng DTK, Tan CW, Leung JKL. Empowering student self‐regulated learning and science education through ChatGPT: A pioneering pilot study. British Journal of Educational Technology. 2024; 55(4): 1328-53. doi: 1111/bjet.13454.
- Huang AY, Lu OH, Yang SJ. Effects of artificial Intelligence–Enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education. 2023; 194: 104684. doi: 1016/j.compedu.2022.104684.
- Salvagno M, Taccone FS, Gerli AG. Correction to: Can artificial intelligence help for scientific writing? Crit Care. 2023; 27(1):99. doi: 1186/s13054-023-04390-0. PMID: 36890525 PMCID: PMC9993712
- Granjeiro JM, Cury AADB, Cury JA, Bueno M, Sousa-Neto MD, Estrela C. The Future of Scientific Writing: AI Tools, Benefits, and Ethical Implications. Braz Dent J. 2025; 36: e256471. doi: 1590/0103-644020256471. PMID: 40197923 PMCID: PMC11981593
- Fricke S. Semantic Scholar. Journal of the Medical Library Association: JMLA, 2018; 106(1), 145–147. doi: 5195/jmla.2018.280.
- Konwar B. Ethical Considerations in The Use of AI Tools Like ChatGPT and Gemini in Academic Research. Turkish Online Journal of Qualitative Inquiry. 2025; 16(1). doi:53555//8x4z4836.
- Barrot JS. Balancing Innovation and Integrity: An Emerging Technology Report on SciSpace in Academic Writing. Technology, Knowledge and Learning. 2025; 30(1): 587–92. doi: 1007/s10758-024-09802-w.
- Huang J, Tan M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am J Cancer Res. 2023; 13(4):1148-1154. PMID: 37168339 PMCID: PMC10164801
- Meliante LA, Coco G, Rabiolo A, De Cillà S, Manni G. Evaluation of AI Tools Versus the PRISMA Method for Literature Search, Data Extraction, and Study Composition in Glaucoma Systematic Reviews: Content Analysis. JMIR AI. 2025; 4: e68592. doi: 10.2196/68592. PMID: 40911843 PMCID: PMC12413140
- Dalkey N, Helmer O. An experimental application of the Delphi method to the use of experts. Management Science. 1963; 9(3): 458-67. doi: 10.1287/mnsc.9.3.458.
- Rahmani A, Vaziri Nejad R, Ahmadi Nia H, Rezaian M. Methodological Principles and Applications of the Delphi Method: A Narrative Review. Journal of Rafsanjan University of Medical Sciences. 2020; 19(5): 515-38. doi: 10.29252/jrums.19.5.515. [In Persian]
- Ahmadi A, Noetel M, Parker P, Ryan RM, Ntoumanis N, Reeve J, et al. A classification system for teachers' motivational behaviors recommended in self-determination theory interventions. Journal of Educational Psychology. 2023; 115(8): 1158-76. doi: 10.1037/edu0000783.
- Okoli C, Pawlowski SD. The Delphi method as a research tool: an example, design considerations and applications. Information & Management. 2004; 42(1): 15-29. doi: 10.1016/j.im.2003.11.002.
- Linstone HA, Turoff M. The Delphi method: techniques and applications. Massachusetts: Addison-Wesley; 2002: 616.
- Khodyakov D, Grant S, Kroger J, Bauman M. RAND methodological guidance for conducting and critically appraising Delphi panels. Santa Monica, California: RAND Corporation; 2023.
- Diefenbach MA, Weinstein ND, O'Reilly J. Scales for assessing perceptions of health hazard susceptibility. Health Educ Res. 1993; 8(2): 181-92. doi: 10.1093/her/8.2.181. PMID: 10148827
- Chiu TKF. A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: A case of ChatGPT. Educational Technology Research and Development. 2024; 72(4): 2401-16. doi: 10.1007/s11423-024-10366-w.
- Shang Z. Use of Delphi in health sciences research: A narrative review. Medicine (Baltimore). 2023; 102(7): e32829. doi: 10.1097/MD.0000000000032829. PMID: 36800594 PMCID: PMC9936053
- Gottlieb M, Caretta-Weyer H, Chan TM, Humphrey-Murto S. Educator's blueprint: A primer on consensus methods in medical education research. AEM Educ Train. 2023; 7(4): e10891. doi: 10.1002/aet2.10891. PMID: 37448627 PMCID: PMC10336022
- Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014; 67(4): 401-9. doi: 10.1016/j.jclinepi.2013.12.002. PMID: 24581294
- Lawshe CH. A quantitative approach to content validity. Personnel Psychology. 1975; 28(4): 563-75. doi: 10.1111/j.1744-6570.1975.tb01393.x.
- Waltz CF, Bausell RB. Nursing research: Design, statistics, and computer analysis. Philadelphia: F.A. Davis Co; 1981: 362.
- Huang AYQ, Lin CY, Su SY, Yang SJH. The impact of GenAI‐enabled coding hints on students' programming performance and cognitive load in an SRL‐based Python course. British Journal of Educational Technology. 2025; 56: 1942-72. doi: 10.1111/bjet.13589.
- Zimmerman BJ. Becoming a Self-Regulated Learner: An Overview. Theory Into Practice. 2002; 41(2): 64-70. doi: 10.1207/s15430421tip4102_2.
- Lee HY, Chen PH, Wang WS, Huang YM, Wu TT. Empowering ChatGPT with guidance mechanism in blended learning: Effect of self-regulated learning, higher-order thinking skills, and knowledge construction. International Journal of Educational Technology in Higher Education. 2024; 21(1): 16. doi: 10.1186/s41239-024-00447-4.
- Badakhshian SS, Samiei F. The effectiveness of a vocational rehabilitation program based on social-cognitive theory on the self-efficacy of individuals with spinal cord injury. Journal of Counseling Research. 2022; 21(83): 193-210. doi: 10.18502/qjcr.v21i83.11091. [In Persian]
- Sohrabi A, Mahdad A, Sadeghi A, Sadeghi A. The effectiveness of social-cognitive counseling based on Lent and Brown's model on the components of perceived academic self-efficacy among Isfahan medical students. Iranian Journal of Education in Medical Sciences. 2014; 22(45): 305-13. doi: 10.48305/22.10. [In Persian]
- Hashemian M, Abdolkarimi M, Asadollahi Z, Nasirzadeh M. Effect of “Social Cognitive Theory”-based Intervention on Promoting Physical Activity in Female High-School Students of Rafsanjan City, Iran. Journal of Education and Community Health. 2021; 8(2): 111-119.