Effectiveness of E-learning in Pharmacy Education
Methods
This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement. The protocol for the review is published elsewhere. We defined specific criteria to allow a focused review of the effectiveness of e-learning in pharmacy education (Table 1). We included any effectiveness research that evaluated e-learning programs in undergraduate, postgraduate, and continuing professional development pharmacy education. We did not set limits on study design, language, or year of publication.
We consulted a senior reference librarian at The University of Western Australia's Medical and Dental Library, who had expertise in conducting systematic literature reviews, to develop a comprehensive search strategy (Table 2). Databases were searched from inception to June 4, 2013. The review was conducted using the Web-based systematic review software DistillerSR (Evidence Partners Incorporated, Ottawa, Canada). All identified citations were uploaded to DistillerSR and duplicates were removed. We developed forms for title/abstract and full-text screening according to the stated eligibility criteria, and pilot tested them before implementing them in study selection. Two reviewers independently and in duplicate screened all titles and abstracts. Potentially eligible abstracts, abstracts where reviewers disagreed, and abstracts with insufficient information were retrieved for full-text review. Two reviewers then assessed the eligibility of each study in duplicate, and a final list of studies was determined. Agreement between reviewers was measured using Cohen's kappa, estimated using DistillerSR (weighted kappa 0.75 for the title/abstract screen and 0.88 for the full-text screen). Conflicts were resolved by consensus. Reasons for exclusion were documented and are presented in Figure 1.
Figure 1.
Systematic review flow. Studies may have contributed more than one effectiveness outcome measure. Reaction=satisfaction and course opinions. Learning=change in attitudes, knowledge or skills (including perceptions of these). Behavior=practice change (actual or willingness to change). Results=organizational change and patient benefit.
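For illustration, the following minimal sketch in Python shows how a weighted kappa of the kind reported above can be computed for two reviewers making ordinal include/unsure/exclude screening judgments. The decisions below are hypothetical, not the review's DistillerSR data.

from sklearn.metrics import cohen_kappa_score

# Hypothetical title/abstract decisions for 10 citations:
# 0 = exclude, 1 = unsure (retrieve full text), 2 = include
reviewer_a = [2, 0, 1, 0, 2, 2, 0, 1, 0, 2]
reviewer_b = [2, 0, 2, 0, 2, 1, 0, 1, 0, 2]

# Linear weights penalize near-misses (unsure vs include) less than
# outright disagreements (exclude vs include).
kappa = cohen_kappa_score(reviewer_a, reviewer_b, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")  # 0 = chance-level agreement, 1 = complete agreement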
Two reviewers independently abstracted data using a series of dedicated forms we developed based on the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre) data extraction and coding tool for education studies. These forms were piloted and refined prior to data abstraction, and applied through DistillerSR. We assessed reviewer agreement in data abstraction using Cohen's kappa, where 0=no agreement and 1=complete agreement. We abstracted data on study characteristics (study aims, location, participants, intervention topic, and assessment; kappa range 0.53–1); study design and methodology (sampling and recruitment, blinding, power, funding; kappa range 0.43–1); data collection and analysis (how data were collected, use and reliability of tools, statistical analysis; kappa range 0.48–1); and outcomes. As the focus of this review was on effectiveness, we sought information on outcomes that measured change after the e-learning intervention was delivered. The form for learning outcomes identified knowledge or skills change, and problem-solving ability (kappa range 0.55–1). The form for behavior and results outcomes identified willingness to change behavior or practice change, and organizational change or patient benefit (kappa range 0.64–1). The form for reaction outcomes identified satisfaction, attitudes, and opinions (kappa range 0.48–1). Finally, where relevant, we contacted authors by e-mail to request missing data.
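The abstraction domains and the agreement observed for each can be summarized as in the minimal sketch below; the field names paraphrase the items listed above, and the structure is illustrative rather than the EPPI-Centre or DistillerSR form itself.

# Illustrative summary of the abstraction domains, their fields, and the
# reported reviewer-agreement (kappa) ranges; not the actual review forms.
ABSTRACTION_FORMS = {
    "study_characteristics": {
        "fields": ["aims", "location", "participants", "intervention_topic", "assessment"],
        "kappa_range": (0.53, 1.0),
    },
    "design_and_methodology": {
        "fields": ["sampling_and_recruitment", "blinding", "power", "funding"],
        "kappa_range": (0.43, 1.0),
    },
    "data_collection_and_analysis": {
        "fields": ["collection_method", "tool_use_and_reliability", "statistical_analysis"],
        "kappa_range": (0.48, 1.0),
    },
    "learning_outcomes": {
        "fields": ["knowledge_or_skills_change", "problem_solving_ability"],
        "kappa_range": (0.55, 1.0),
    },
    "behavior_and_results_outcomes": {
        "fields": ["willingness_or_practice_change", "organizational_change_or_patient_benefit"],
        "kappa_range": (0.64, 1.0),
    },
    "reaction_outcomes": {
        "fields": ["satisfaction", "attitudes", "opinions"],
        "kappa_range": (0.48, 1.0),
    },
}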
We expected the studies to be diverse, to include both qualitative and quantitative designs, to consist mostly of noncomparative studies, and, by the very nature of e-learning interventions, to be limited in their ability to conceal the intervention from the participant. Further, acknowledging that quality assessment of education intervention studies is complex, we considered no single published quality assessment tool to be appropriate for this review. However, aspects of 3 published tools were considered relevant to quality: the Cochrane Risk of Bias Criteria for Effective Practice and Organisation of Care reviews tool, the NICE quality appraisal checklist, and the EPPI-Centre data extraction and coding tool for education studies. To provide a more robust assessment of quality, we developed a quality assessment tool that incorporated relevant aspects of each of the published tools as well as additional criteria, and embedded the assessment within the data abstraction forms in DistillerSR (Appendix 1).
We concurrently assessed the impact of each intervention in terms of Kirkpatrick's hierarchy, and the strength of findings for each study in terms of the BEME weight of evidence rating scale (strength 1=no clear conclusions can be drawn, not significant; 2=results ambiguous, but there appears to be a trend; 3=conclusions can probably be based on the results; 4=results are clear and very likely to be true; 5=results are unequivocal).
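For illustration, a minimal sketch of the two rating frameworks follows, with a hypothetical study rating as a usage example; the level descriptions follow the Figure 1 caption and the strength descriptions follow the BEME scale quoted above.

from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    REACTION = 1   # satisfaction and course opinions
    LEARNING = 2   # change in attitudes, knowledge or skills (or perceptions of these)
    BEHAVIOR = 3   # practice change, actual or willingness to change
    RESULTS = 4    # organizational change and patient benefit

BEME_STRENGTH = {
    1: "No clear conclusions can be drawn; not significant",
    2: "Results ambiguous, but there appears to be a trend",
    3: "Conclusions can probably be based on the results",
    4: "Results are clear and very likely to be true",
    5: "Results are unequivocal",
}

# Hypothetical rating of a single study (illustration only):
study_rating = {"outcome_level": KirkpatrickLevel.LEARNING, "beme_strength": 3}
print(study_rating["outcome_level"].name, "-", BEME_STRENGTH[study_rating["beme_strength"]])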
There was substantial variation between studies in design, intervention, duration, assessment method, and outcome. There were few controlled studies, and every study assessed a different topic within pharmacy education. Few studies reported sufficient data to enable calculation of a combined effect size, and there was limited response to requests for data. Given the contextual limitations on methodology in education research (and the associated complication of interpreting education outcomes), and the risks associated with evidence from uncontrolled studies and with imputing data, it was neither possible nor appropriate to conduct a meta-analysis for any outcome.
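To make concrete what "sufficient data to enable calculation of a combined effect size" entails, the following minimal sketch (with hypothetical numbers) computes the standardized mean difference that pooling would have required from each comparative study; where means, standard deviations, or group sizes were not reported, this quantity cannot be derived without imputation.

from math import sqrt

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)          # Hedges' correction factor
    return d * j

# Hypothetical post-test knowledge scores: e-learning group vs. comparison group
print(round(hedges_g(m1=78.0, sd1=10.0, n1=40, m2=72.0, sd2=11.0, n2=38), 2))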
We adopted a modified meta-narrative approach to synthesis. We considered how e-learning effectiveness was conceptualized in each study, drawing on the key outcome measures and how they were assessed. To start, outcomes were broadly themed according to the 4 levels of Kirkpatrick's hierarchy. We then iteratively categorized the results of each study according to how each outcome was defined (eg, perceived confidence, actual knowledge) and measured (eg, rating scales, formal test), to yield a detailed map of e-learning effectiveness in pharmacy education.
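As an illustration of this iterative categorization, the minimal sketch below (with hypothetical study labels) files each outcome by Kirkpatrick level, by whether it was perceived or objectively measured, and by how it was assessed, building up the kind of map described above.

def categorize_outcome(level, definition, measurement):
    """Return a key for the e-learning effectiveness map.

    level       -- "reaction", "learning", "behavior", or "results"
    definition  -- e.g. "perceived confidence" or "actual knowledge"
    measurement -- e.g. "rating scale" or "formal test"
    """
    perceived = definition.startswith("perceived")
    return (level, "perceived" if perceived else "actual", measurement)

# Hypothetical outcomes from two studies, grouped into map cells
effectiveness_map = {}
for study_id, level, definition, measurement in [
    ("study_A", "learning", "actual knowledge", "formal test"),
    ("study_B", "learning", "perceived confidence", "rating scale"),
]:
    key = categorize_outcome(level, definition, measurement)
    effectiveness_map.setdefault(key, []).append(study_id)
print(effectiveness_map)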