Strides in Development of Medical Education

Document Type : Original Article

Authors

1 M.B.B.S., M.D., Assistant Professor, Department of Pharmacology, All India Institute of Medical Sciences (AIIMS) Bibinagar, Hyderabad, Telangana, India

2 M.B.B.S., M.D., Professor & Head, Department of Pharmacology, Sri Venkateshwaraa Medical College Hospital and Research Centre (SVMCH & RC), Puducherry, India

Abstract

Background: Though there is an increase in the number of scientific programs, the quality of these sessions is not always optimal.
Objectives: Our objective was to assess the knowledge, attitude, and practice of medical fraternity with regard to participation in scientific programs.
Methods: A total of 103 faculty members and postgraduates from all specialties of Sri Venkateshwaraa Medical College Hospital and Research Centre, Puducherry, India, who attended at least one scientific program (conferences, workshops, symposiums, panel discussions, or CMEs) in the past one year, were included. This was a cross-sectional questionnaire-based study conducted over a period of three months in 2019. The content validity index (CVI) was computed to ascertain the validity of the questionnaire. Principal component analysis (PCA) followed by the calculation of Cronbach’s alpha was conducted to ascertain the reliability of the questionnaire. The self-developed and validated questionnaire was distributed among the respondents, and the necessary instructions for completing it were explained to them.
Results: Out of 85 participants, 96.5% and 74% responded correctly to the definitions of workshop (n = 82) and conference (n = 63), respectively. The CVIs of individual questionnaire items were higher than 75%, and Cronbach’s alpha of the questionnaire was obtained as 0.60. The mean knowledge score was 3.14 ± 1.3, and demographic characteristics were not found to influence the knowledge score (p > 0.05). The ‘expertise of resource persons’ and ‘necessity of the topic’ were the major factors determining the tendency for participation in scientific programs, as agreed by 81.2% and 80% of the respondents, respectively (p < 0.001). Among the 83% of respondents who were satisfied with the last scientific program they attended, the major reasons for satisfaction were ‘scientific content’ (63%) and ‘resource persons, speakers, or trainers’ (63%).
Conclusion: The results of this study can be insightful to organizing bodies for better understanding the prerequisites of conducting any scientific session.

Keywords

Background

A variety of scientific programs are conducted in medical fields, from conferences, workshops, symposiums, seminars, and panel discussions to continuing medical education (CME). Each program is unique in its organization and execution. The objective of individual programs may also vary even if the topic is the same (1). With rapid advances in medical sciences, there is a need to update one’s knowledge and skills continually. Accordingly, scientific programs have mushroomed to far greater numbers than before.

Since 2011, as noted in the ‘Code of Medical Ethics Regulations’, the Medical Council of India (MCI) has entrusted every doctor to obtain a minimum of 30 hours of CME credit points in a 5-year period to ensure the maintenance of good medical practice (2, 3). Moreover, some State Medical Councils have also made it mandatory to acquire stipulated scientific program-related credit hours for the renewal (re-licensure for medical practice) of medical registration (4, 5). Postgraduates in medicine are also supposed to perform paper presentations as a part of partial fulfillment for graduation, making participation in these academic programs particularly important to them (6).

Outside India, the same credit point (hours)-based system is followed even more rigorously, particularly for the revalidation or recertification of medical practitioners (7). Despite the increase in the number of scientific programs held regularly, it is sometimes felt that their quality is not up to expectations. Most often, participants are not fully satisfied with a scientific program in one way or another (2). Predatory scientific meetings are also increasing in number throughout the globe, particularly targeting young academicians from developing nations. These fraudulent conventions are organized by exploitative and unqualified groups for revenue rather than scientific purposes (8, 9).

Hence, we aimed to evaluate the knowledge, attitude, and practice of faculty members and medical postgraduates with regard to participation in scientific programs in a tertiary care teaching institute.
Knowledge, attitude, and practice (‘KAP’) studies aim to explore the awareness, concerns, and actions of a particular community (here, healthcare professionals) towards a particular subject of interest (10).

To the best of our knowledge, so far, no similar studies have engaged in gauging the understanding, beliefs, and behaviors of medical professionals toward attending scientific programs.

Objectives

Our objective was to assess the knowledge, attitude, and practice of medical professionals with regard to participation in scientific programs.

Methods

Study Design, Setting, and Subjects: This cross-sectional questionnaire-based study was conducted in Sri Venkateshwaraa Medical College Hospital and Research Centre (SVMCH & RC), Puducherry, India (a tertiary care teaching hospital).

Faculty members and postgraduates, from all specialties, who attended at least one scientific program in the past year, were eligible. All cadres of faculty members from professors, associate professors, assistant professors, and senior residents to tutors were eligible to participate in the study. Scientific programs, under the purview of this study, comprised conferences, workshops, symposiums, panel discussions, and CMEs conducted inside or outside of our institute.

Faculty members and postgraduates who attended only online training programs, certificate courses, and routine intradepartmental activities like journal clubs, subject review presentations, and debates were excluded from the study.

Sample Size: The sample size was calculated based on the following formula for estimating a percentage or proportion in a finite population:

n = z²pqN / [e²(N − 1) + z²pq]

where z = 1.96 (standard normal deviate for a 95% confidence level), p = 50% (expected sample proportion), q = 50% (i.e., 1 − p), N = 266 (size of the population, i.e., the total number of faculty members and postgraduates), and e = 10% (acceptable margin of error, conventionally taken as 10%).

Hence, with a margin of error of 10%, a confidence level of 95%, a population size of 266, and a response distribution of 50%, the minimum recommended sample size for the survey would be around 80 [considering a drop-out rate of 10%].
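As a cross-check of the arithmetic above, the sample size calculation can be sketched in a few lines of Python; the exact rounding and drop-out adjustment conventions (dividing by 0.9 for a 10% drop-out) are our assumptions, not stated in the text:

```python
import math

def finite_population_sample_size(z, p, N, e):
    """Minimum n for estimating a proportion p in a finite population of size N,
    with standard normal deviate z and acceptable margin of error e."""
    q = 1 - p
    return (z ** 2 * p * q * N) / (e ** 2 * (N - 1) + z ** 2 * p * q)

# Parameters from the text: 95% confidence, p = q = 0.5, N = 266, e = 10%
n = finite_population_sample_size(z=1.96, p=0.5, N=266, e=0.10)  # about 71
# Inflate for the assumed 10% drop-out rate:
n_final = math.ceil(n / 0.9)  # about 79, i.e., "around 80" as stated
```

With these parameters the base estimate is about 71, rising to roughly 79–80 after the drop-out adjustment, consistent with the minimum sample of around 80 reported above.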

Study Procedure

The principles of the Declaration of Helsinki and good clinical practice were strictly adhered to during the entire course of the study.

A non-random convenience sampling technique was adopted. The details of the study were explained to eligible faculty members and postgraduates, and written informed consent was obtained from them. The pre-validated questionnaire was distributed among the participants, and the necessary instructions on how to complete it were explained to them. On average, 10 to 15 minutes were given to fill in the questionnaire. Completed questionnaires were collected in sealed envelopes to maintain anonymity.

The questionnaire contained 15 items, with 5 items in the ‘knowledge’ section, 6 items in the ‘attitude’ section, and 4 items in the ‘practice’ section (Appendix). The ‘knowledge’ section was composed of partially categorized questions; the ‘attitude’ section included Likert scale-type questions; and the ‘practice’ section contained both closed-ended and partially categorized questions. The questionnaire also collected demographic details at the beginning; however, respondents were not asked for their names.

Questionnaire Development: As there were no similar studies, the entire questionnaire was self-developed and then validated. A panel of experts (n = 5), comprising senior faculty members in our institute, meticulously reviewed the questionnaire.
Each expert independently rated the relevance of each item on a 4-point Likert scale (1 = not relevant, 2 = somewhat relevant, 3 = relevant, 4 = completely relevant). Ratings of ‘3’ and ‘4’ were together considered a “favorable” response, marking the item as relevant; similarly, ratings of ‘1’ and ‘2’ were together considered an “unfavorable” response, marking the item as irrelevant. All 15 items were retained without major modifications, as the content validity index (CVI) of every individual item was well above the cut-off (Table 1).
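The item-level CVI described above reduces to the proportion of experts giving a “favorable” (3 or 4) rating. A minimal sketch in Python, using hypothetical ratings rather than the study’s actual panel data:

```python
def item_cvi(ratings):
    """Item-level content validity index: fraction of experts rating the item
    3 ('relevant') or 4 ('completely relevant') on the 4-point scale."""
    favorable = sum(1 for r in ratings if r >= 3)
    return favorable / len(ratings)

# Hypothetical ratings from a 5-member expert panel (illustrative only)
expert_ratings = [4, 3, 4, 2, 4]
cvi = item_cvi(expert_ratings)  # 4 of 5 favorable -> 0.8
retain_item = cvi > 0.75        # above the study's 75% cut-off
```

Under the study’s rule, this hypothetical item (CVI = 0.8 > 0.75) would be retained without modification.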

The validated 15-item questionnaire was then submitted to the Copyright Office, Government of India, and the copyright was granted with the registration number L-79084/2018.

Questionnaire Items: The questionnaire items 1 to 5 were framed to test knowledge about the definition of various scientific programs. Items 1, 2, 3, 4, and 5 inquired about the definition of workshop, conference, CME, symposium, and panel discussion, respectively. A score of ‘1’ or ‘0’ was awarded for a ‘correct’ or ‘wrong’ response, respectively. The cumulative score was calculated by adding the individual scores of each questionnaire item. Hence, the total score of each respondent could range from 5 (maximum; all answers were correct) to 0 (minimum; no answer was correct).

There were six questionnaire items (6 to 11) under the section of attitude towards participation in scientific programs. The ‘necessity of the topic’, ‘reputation of the organizing body’, ‘expertise of the resource person’, ‘registration fees’, ‘distance from the workplace’, and ‘length of the scientific program’ were described in items 6, 7, 8, 9, 10, and 11, respectively. The items were scored on a 5-point Likert scale with options ranging from ‘strongly agree’, ‘moderately agree’, ‘neutral’, ‘moderately disagree’, to ‘strongly disagree’. Items 6, 9, and 10 were reverse coded. Under the practice section, the first two items (items 12 and 13) addressed the frequency of participation in scientific programs. Item 12 was about participation in conferences, symposiums, or CMEs, and item 13 was about participation in workshops or training programs. Item 14 had two parts (A and B). Item 14A questioned satisfaction with the last scientific program on a 5-point Likert scale ranging from ‘completely satisfied’, ‘partially satisfied’, ‘neutral’, ‘partially dissatisfied’, to ‘completely dissatisfied’, and item 14B addressed the reason(s) for ‘satisfaction’ or ‘dissatisfaction’, where the respondent was allowed to select multiple options. The last questionnaire item (i.e., item 15) was a general question about the reason(s) for attending scientific programs, in which selecting multiple options was permissible (Table 2).
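Reverse coding of Likert items, as applied to items 6, 9, and 10 above, can be sketched as follows; the mapping assumes a conventional 1–5 numeric coding, which is an illustration on our part rather than the study’s stated scheme:

```python
# On a 5-point scale coded 1-5, a reverse-coded item maps 1<->5 and 2<->4,
# i.e., recoded score = 6 - original score. Items 6, 9, and 10 are the
# reverse-coded 'attitude' items named in the text.
REVERSE_CODED = {6, 9, 10}

def recode(item_no, score):
    """Return the analysis-ready score for a 5-point Likert response."""
    return 6 - score if item_no in REVERSE_CODED else score
```

For example, a rating of 5 on reverse-coded item 6 becomes 1 after recoding, while the same rating on direct-coded item 7 is left unchanged.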

Statistical Analysis: Data were expressed as mean ± standard deviation for continuous variables and percentage (%) for categorical variables. The chi-square goodness-of-fit test was used for the analysis of categorical variables. The Mann-Whitney U test was used for comparing continuous non-parametric variables, and the chi-square or Fisher’s exact test for comparing categorical variables. Principal component analysis followed by reliability analysis was undertaken to check the internal consistency of the questionnaire. The data were recorded and analyzed using Microsoft Excel, Office 2010 (Microsoft Corporation, Redmond, WA, USA), GraphPad InStat, version 3.06 (GraphPad Software, San Diego, CA, USA), and SPSS, version 20.0 (SPSS Inc., Chicago, IL, USA).
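The chi-square goodness-of-fit test mentioned above compares observed response counts against a uniform expectation. A minimal sketch in Python with illustrative (not actual) counts for one 5-option Likert item:

```python
# Chi-square goodness-of-fit statistic for one Likert item, testing whether
# the five response categories are equally likely. Counts are illustrative.
observed = [40, 29, 9, 4, 3]  # e.g., strongly agree ... strongly disagree
expected = [sum(observed) / len(observed)] * len(observed)

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 5 categories there are 4 degrees of freedom; the 0.05 critical value
# of the chi-square distribution is 9.488, so chi2 > 9.488 rejects uniformity.
significant = chi2 > 9.488
```

For this toy distribution the statistic is about 64.8, far above the critical value, mirroring the strongly non-uniform agreement patterns reported in the Results.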

Results

The questionnaires were distributed among 103 faculty members and postgraduates enrolled based on the eligibility criteria. Of these, 85 questionnaires were included in the final analysis (ten questionnaires had incomplete data, five persons declined to participate, and another three failed to return the forms).

The respondents were adequately representative of all departments, namely, Anatomy, Physiology, and Biochemistry (pre-clinical departments); Pharmacology, Pathology, Microbiology, Forensic Medicine, and Community Medicine (para-clinical departments); and Ophthalmology, Otorhinolaryngology, General Medicine, General Surgery, Obstetrics and Gynecology, Pediatrics, Orthopedics, Anesthesiology, Pulmonary Medicine, and Radiodiagnosis (clinical departments). Furthermore, 54.1% of the respondents were from the college (pre- and para-clinical departments), and the remaining 45.9% were from the hospital side (clinical departments).

Most of the respondents were assistant professors (37.7%), followed by postgraduates (24.7%), professors (12.9%), associate professors (9.4%), senior residents (9.4%), and tutors (5.9%). Overall, 62.4% of the respondents were male, and the mean age was around 34 years. Excluding postgraduates, the respondents possessed an average of 6.4 years of teaching experience (Table 3).

Knowledge

Only 24% of the respondents correctly answered all definitions of scientific programs (the total knowledge score = 5) [Q. 1 to Q. 5]. The total knowledge score was less than 3 in 35% of the participants. The mean knowledge score was 3.14 ± 1.3.

Out of 85 respondents, 82 (96.5%) identified the definition of workshop correctly. The definition of conference was chosen correctly by nearly 74% of
the respondents. Around 50% of the respondents selected the correct option denoting the definition of CME and symposium. However, only 36.5% of the respondents were aware of the definition of panel discussion (Figure 1).

Collectively, around 4% of the knowledge-related items were either left unanswered or marked as unsure of the exact answer.

The respondents were categorized based on their total knowledge scores, i.e., those with a high knowledge score (≥ 3) and those with a low knowledge score (< 3), and the association with demographic characteristics was studied. None of the demographic parameters, namely, age, gender, educational qualification, academic position (designation), department, experience, and the number of scientific programs attended were found to be related to the knowledge score (p > 0.05) (Table 4).

Attitude

Overall, 81.2% and 80% of the respondents agreed (including both strong and moderate agreement) that ‘expertise of the resource persons’ and ‘necessity of the topic’ were, respectively, among the major factors determining participation in scientific programs. Similarly, ‘agreement’ was expressed for other factors like ‘reputation of the organizing body’, ‘distance from the workplace’, ‘length of the program’, and ‘registration fees’ with frequencies of 63.5%, 63.5%, 51.8%, and 47.1%, respectively. The chi-square goodness-of-fit test for all categories was statistically significant with
p < 0.001 (Figure 2) [Q. 6 to Q. 11].

Practice

A total of 47% of the respondents attended conferences, symposiums, or CMEs once every three months or more often, whereas those attending workshops or training programs that frequently constituted only 12% of the participants. The chi-square goodness-of-fit test for both categories was statistically significant with p < 0.001 [Q. 12 and Q. 13].

Also, 83% of the respondents were either completely or partially satisfied with their participation in the last scientific program they attended, and only 4% expressed dissatisfaction. Excluding this 4% and the 13% of respondents who selected the neutral option, 63% of the satisfied respondents (n = 70) cited ‘scientific content’ and 63% cited ‘resource persons, speakers, or trainers’ as their reasons for satisfaction.

‘Food and accommodation’ and ‘ambience’ were reasons for satisfaction in 21% and 17% of the participants, respectively (Figure 3) [Q. 14].

Most (93%) of the respondents vouched for the option ‘improvement of scientific knowledge or skills – professional development’ as the major driving force for their participation in scientific programs. ‘Establishing more professional contacts’ (59%) and ‘acquiring credit points for academic promotion’ (49%) were the next important motivating factors that encouraged the respondents to attend scientific programs (Figure 4) [Q. 15].

Finally, the respondents were categorized based on the knowledge score, and a bivariate analysis was performed to assess the influence of knowledge on attitude and practice. The results showed no association between the knowledge score and attitude or practice toward participation in scientific programs (p > 0.05). Only the ‘expertise of the resource person’ seemed to be significantly associated with knowledge (p = 0.011); around 89% of respondents with higher knowledge (scores ≥ 3) agreed upon the importance of ‘the expertise of the resource person’ in their decision to attend scientific programs compared to only 66% of respondents with lower knowledge (scores < 3) (Table 5).

Principal Component Analysis and Reliability Analysis

Principal component analysis (PCA) was performed in multiple runs to assess the dimensionality of the variables (items) studied. The initial PCA was run with nine items, i.e., items 6 to 14A. The items of the ‘knowledge’ section and items 14B and 15 were excluded from the analysis because the ‘knowledge’ items had a single best option, whereas items 14B and 15 allowed multiple responses. The reverse-coded ‘attitude’ items (namely, items 6, 9, and 10) were re-coded appropriately before the analysis. The final PCA yielded four components with eigenvalues > 1 (the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.492, and Bartlett’s test of sphericity was statistically significant). These four components included five variables (items) with individual factor loadings higher than 0.4 (i.e., items 7, 8, 11, 12, and 13 with factor loadings of 0.719, 0.632, 0.464, 0.442, and 0.561, respectively). Hence, based on the results of the PCA, these five items were subjected to reliability analysis, yielding a Cronbach’s alpha (coefficient alpha) of 0.60.
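Cronbach’s alpha for the k retained items is alpha = k/(k − 1) × (1 − Σ item variances / variance of the total scores). A minimal Python sketch on a toy response matrix (rows = respondents, columns = the five retained items); the data are illustrative only, and their alpha is much higher than the 0.60 obtained in the study:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """Cronbach's alpha from a respondents-by-items score matrix."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy 5-respondent x 5-item Likert responses (illustrative, not study data)
rows = [[5, 4, 4, 3, 4],
        [4, 4, 3, 3, 3],
        [2, 3, 2, 1, 2],
        [5, 5, 4, 4, 5],
        [3, 2, 3, 2, 2]]
alpha = cronbach_alpha(rows)  # high here because the toy items co-vary strongly
```

Alpha rises as items co-vary more strongly; values around 0.60, as obtained here, are commonly deemed acceptable for exploratory instruments (see the Discussion).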

Discussion

Overall, only 48% of the study participants knew the definitions of CME, symposium, and panel discussion, after excluding the 4% of knowledge items that were left blank or marked as unsure. Hence, most of the respondents had a wrong presumption about the definitions of scientific programs, particularly of programs conducted less frequently, like symposiums and panel discussions, compared to those held more commonly, such as conferences and workshops. Nearly 85% of the respondents were aware of the definitions of workshop and conference. The higher rate of wrong responses for the definition of panel discussion could be due to the lack of a concrete option for it; instead, the respondents needed to fill in this response by choosing the last, blank option.

It was observed that more experienced respondents (around 7 years) had a higher knowledge score (≥ 3) compared to the less experienced respondents (around 5 years), though this difference was not statistically significant.

The dismal understanding of the respondents drives home the message that medical professionals need to be educated about the various kinds of scientific programs being conducted. Occasionally, the organizers of scientific programs are to blame, as they hold programs under the wrong label – like designating a symposium or panel discussion as a CME. The explanation for such errors in description could be the fine line of demarcation separating these scientific programs. Hence, the onus lies with medical education departments to enlighten both organizers and participants about the various types of scientific programs.

Indirectly, the ‘attitude’ dimension measured major contributors to participation in a scientific program from delegates’ perspectives. The major motivating factors contributing to participation in scientific programs were found to be the ‘need for the topic’ and the ‘expertise of the resource person’ based on the responses of more than 80% of the respondents. On the other hand, the ‘registration fees’ and ‘duration’ of the scientific program played a lesser role, as mentioned by around 50% of the respondents. An online survey by Lang et al. (11) revealed similar results wherein 90% of the faculty members who participated in the study noted the following three factors as the most important criteria for choosing to participate in a conference, namely, ‘topics being in the areas of interest’, ‘well-known, respected plenary speakers’, and ‘likelihood for cutting-edge research being presented’; while relatively fewer respondents considered the ‘location’, ‘time’, and ‘cost’ of the conference important elements.

The relative reluctance of academics to attend workshops or training programs is detrimental, as these programs can drastically upscale their hands-on skills and knowledge of relevant topics compared to the more didactic and less participant-centered conferences, symposiums, or CMEs (12).

Based on our results, to boost the satisfaction of attendees, organizers need to concentrate more on the scientific content and resource persons than on the food, accommodation, or ambience. Similarly, the key drivers luring the scientific community to attend scientific programs could be the improvement of scientific knowledge or skills and the development of professional contacts, as well as gathering credit points for promotion.

The responsibility of program organizers is to maintain the academic robustness of the scientific program they intend to conduct. Rigorous steps need to be followed when conducting these programs, from selecting the theme or topic, deciding on the speakers or resource persons, choosing appropriate dates and venues, preparing brochures, and inviting delegates to, finally, organizing the event methodically (13, 14). Post-program evaluation of academic sessions, particularly with regard to CMEs, is necessary to ascertain the effectiveness of the program (15).

The strikingly lower positive responses to non-academic options like food, accommodation, ambience, places, or meeting friends and relatives may not be a true reflection of practice. KAP questionnaire studies essentially capture the declarative opinions of respondents and, as such, substantial gaps may exist between what is said and what is done, which is a known limitation of any KAP study.

Cronbach’s alpha of the present instrument was obtained as 0.60, and this was justifiable as the current study was rather an exploratory one with a relatively small sample size (16, 17).

Like innovations in medical science research, more novel strategies are needed for conducting scientific programs to break the monotony and keep participants more engaged with the subject (18). Video- or tele-conferences, webinars, podcasts, webcasting, and other repurposed meetings conducted through interactive social media are some of the potential alternatives. Moreover, the focus should be on the quality of academic sessions rather than on their frequency (19, 20). Sometimes, a combination of conventional didactic and more interactive programs is required to increase the effectiveness of educational meetings (21).

Limitations of this study include the relatively small sample size and the study population’s limited representativeness of medical professionals in other parts of India and elsewhere outside the country.

Conclusion

Our respondents were better aware of the definitions of conference and workshop than of other types of scientific programs. The ‘necessity of the topic’ and the ‘expertise of resource persons’ were the two foremost determining factors for participation in a scientific program. The participants tended to attend conferences, symposiums, or CMEs more often than workshops or training programs. The major elements increasing satisfaction with programs were ‘scientific content’ and ‘resource persons, speakers, or trainers’, as opined by those who were satisfied with the last scientific program they attended. Likewise, the major driving force for participation in a scientific meeting was the need to improve scientific knowledge or skills and pursue professional development.

This questionnaire-based survey can offer a useful tool to explore the needs of medical healthcare professionals attending a scientific program. The results of this study can be used by organizing bodies to identify the prerequisites of conducting a scientific session.

  1. Kreps GL. Communication for health education. In: Park K. Park’s Textbook of Preventive and Social Medicine. 25th ed. Jabalpur: M/s Banarsidas Bhanot; 2019.
  2. Das S, Shah M, Mane A, Goyal V, Singh V, Lele J. Accreditation in India: Pathways and Mechanisms. J Eur CME. 2018 Apr 4;7(1):1454251. doi: 10.1080/21614083.2018.1454251. [PMID: 29755849] [PMCID: PMC5912189]
  3. Medical Council of India. Code of Medical Ethics Regulations, 2002. [cited 2024 July 21]. Available from: URL: https://www.nmc.org.in/rules-regulations/code-of-medical-ethics-regulations-2002/
  4. Tamilnadu Medical Council, Chennai. Guidelines for Accreditation of Continuing Education Programme [CEP]. [cited 2019 May 28]. Available from: URL: https://www.coimbatoreima.com/credithrs.pdf
  5. Maharashtra Medical Council, Mumbai. New policy for holding the CMEs & Credit Points. [cited 2019 May 28]. Available from: URL: https://www.maharashtramedicalcouncil.in/cme/cmenotice/new%20cme%20guidelines.pdf
  6. Lo C-H. Medical Student Perspective: Why Medical Students Should Attend Conferences. American College of Physicians. 2016. [cited 2024 July 21]. Available from: URL: https://www.acponline.org/membership/medical-students/acp-impact/archive/may-2016/medical-student-perspective-why-medical-students-should-attend-conferences
  7. Peck C, McCall M, McLaren B, Rotem T. Continuing medical education and continuing professional development: international comparisons. BMJ. 2000 Feb 12;320(7232):432-5. doi: 10.1136/bmj.320.7232.432. [PMID: 10669451] [PMCID: PMC1117549]
  8. Cress PE. Are Predatory Conferences the Dark Side of the Open Access Movement? Aesthet Surg J. 2017 Jun 1;37(6):734-738. doi: 10.1093/asj/sjw247. [PMID: 28158556]
  9. Mercier E, Tardif P-A, Moore L, Le Sage N, Cameron PA. Invitations received from potential predatory publishers and fraudulent conferences: a 12-month early-career researcher experience. Postgrad Med J. 2018 Feb;94(1108):104-108. doi: 10.1136/postgradmedj-2017-135097. [PMID: 28912190] [PMCID: PMC5800329]
  10. Kaliyaperumal K. Guideline for Conducting a Knowledge, Attitude and Practice (KAP) Study. AECS Illumination. [cited 2024 July 21]. Available from: URL: http://v2020eresource.org/content/files/guideline_kap_Jan_mar04.pdf
  11. Lang R, Mintz M, Krentz HB. An approach to conference selection and evaluation: advice to avoid “predatory” conferences. Scientometrics. 2019;118(2):687-98. doi: 10.1007/s11192-018-2981-6.
  12. Ting J. A shift from passive teaching at medical conferences to more interactive methods improves physician learning. Med Teach. 2007 Mar;29(2-3):285. doi: 10.1080/01421590701287939. [PMID: 17701651]
  13. Ghosh AK. Organizing an effective continuous medical education session. J Assoc Physicians India. 2008 Jul;56:533-8. [PMID: 18846906]
  14. Wittich CM, Chutka DS, Mauck KF, Berger RA, Litin SC, Beckman TJ. Perspective: a practical approach to defining professional practice gaps for continuing medical education. Acad Med. 2012 May;87(5):582-5. doi: 10.1097/ACM.0b013e31824d4d5f. [PMID: 22450184]
  15. Tian J, Atkinson NL, Portnoy B, Gold RS. A systematic review of evaluation in formal continuing medical education. J Contin Educ Health Prof. 2007 Winter;27(1):16-27. doi: 10.1002/chp.89. [PMID: 17385741]
  16. Robinson JP, Shaver PR, Wrightsman LS. Criteria for Scale Selection and Evaluation. In: Robinson JP, Shaver PR, Wrightsman LS, eds. Measures of Personality and Social Psychological Attitudes. San Diego, CA: Academic Press; 1991:1-16. doi: 10.1016/B978-0-12-590241-0.50005-8.
  17. Hair JF Jr, Black WC, Babin BJ, Anderson RE. Exploratory Factor Analysis. In: Hair JF Jr, Black WC, Babin BJ, Anderson RE, eds. Multivariate Data Analysis. Edinburgh Gate, Harlow: Pearson Education Limited; 2014:89-150.
  18. Davies FC, Cheema B, Carley SD. Innovation in the field of medical-conference-based education: a new marketplace. Emerg Med J. 2015 Oct;32(10):756-8. doi: 10.1136/emermed-2015-204718. [PMID: 26101405]
  19. Ioannidis JPA. Are medical conferences useful? And for whom? JAMA. 2012 Mar 28;307(12):1257-8. doi: 10.1001/jama.2012.360. [PMID: 22453564]
  20. Mishra S. Do medical conferences have a role to play? Sharpen the saw. Indian Heart J. 2016 Mar-Apr;68(2):111-3. doi: 10.1016/j.ihj.2016.03.011. [PMID: 27133315] [PMCID: PMC4867024]
  21. Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001;(2):CD003030. doi: 10.1002/14651858.CD003030. [PMID: 11406063]