Handbook of eHealth Evaluation: An Evidence-based Approach
Chapter 27: Future of eHealth Evaluation
Chapter 27
Future of eHealth Evaluation
A Strategic View
Francis Lau
27.1 Introduction
In this handbook we have examined both the science and the practice of eHealth evaluation in different contexts. The first part of the handbook, on conceptual foundations, has provided examples of organizing schemes that can help make sense of eHealth as interdependent sociotechnical systems, and how these systems can be defined and measured. Depending on the purpose and scope of the planned evaluation, an eHealth system may be conceptualized under different assumptions and viewed through multiple lenses in terms of its makeup, behaviours and consequences. For example, an eHealth system may be evaluated in a narrow context at the micro level as an artefact for its technical performance under the information system quality dimension of the Benefits Evaluation Framework. Alternatively, the evaluation may take on a broader scope focusing on the macro-level governance, standards, and funding dimensions of the Clinical Adoption Framework.
The second part of the handbook concerns methodological details and has provided a collection of research approaches that can be applied to address different eHealth evaluation questions. They range from such quantitative methods as comparative and correlational studies to such qualitative methods as descriptive and survey studies. There are also methods that utilize both qualitative and quantitative data sources such as economic evaluation, modelling and data quality studies. In addition, there are published guidelines that can enhance the reporting quality of eHealth evaluation studies. The repertoire of such methods offers ample choice for the evaluator to plan, conduct, publish and appraise eHealth evaluation studies to ensure they are simultaneously rigorous, pragmatic and relevant. The third part of the handbook, on selected eHealth evaluation studies, has provided detailed examples of field studies to demonstrate how the scientific principles of select eHealth evaluation frameworks and methods have been applied in practice within different settings.
The last part of the handbook, on future directions, addresses, first, the need to build capacity in eHealth evaluation and, second, the shifting landscape for eHealth evaluation within the larger healthcare delivery system. This final chapter of the handbook offers some observations on what this future may hold in the years ahead. The discussion is organized under three topics: eHealth as a form of complex intervention, the need for guiding principles on eHealth evaluation methods, and a more strategic view of eHealth evaluation as part of the larger healthcare system. The chapter closes with some key take-home messages on eHealth evaluation for readers.
27.2 eHealth as a Complex Intervention
There is growing recognition that healthcare interventions can be highly complex in nature. This can be due to the number of interacting components that exist in a given intervention, the types of behaviours required by those delivering and receiving the intervention, the number of targeted groups or organizations involved, variability in expected outcomes, and the degree of tailoring permitted in the intervention. Such complexity can lead to variable study findings and an apparent lack of tangible impact from the intervention (Craig et al., 2008).
According to Shcherbatykh, Holbrook, Thabane, and Dolovich (2008), eHealth systems are considered complex interventions since they are often made up of multiple technical and informational components influenced by different organizational, behavioural and logistical factors. The technical components include the eHealth system’s hardware, software, interface, customizability, implementation and integration. The informational components include the operational logic, clinical expertise, clinical importance, evidence-based guidelines, communication processes and promotion of action. The organizational factors that can influence the system include its financing, management and training, the degree of vendor support, the stance of local opinion leaders, and feedback given and received. The behavioural factors include user satisfaction, attitudes, motivation, expectations, interdisciplinary interaction and self-education. The logistical factors include system design, workflow, compatibility, local user involvement, ownership, technological sophistication and convenience of access. Collectively these components and factors can interact in an unpredictable fashion over time to produce the types of emergent system functions, behaviours and consequences that are observed.
For complex eHealth interventions, Eisenstein, Lobach, Montgomery, Kawamoto, and Anstrom (2007) have emphasized the need to understand the intervention components and their interrelationships as prerequisites for effectiveness evaluation. These authors suggested that the overall complexity of an intervention is a combination of the complexity of the problem being addressed, the intervention itself, the inputs and outputs of the healthcare setting, and the degree of user involvement. The group later developed the Oxford Implementation Index as a methodology that can be applied to eHealth evaluation (Montgomery, Underhill, Gardner, Operario, & Mayo-Wilson, 2013). This index has four implementation components that can affect intervention fidelity: intervention design; intervention delivery by providers; intervention uptake by participants; and contextual factors. These have been organized as a checklist for assessing intervention study results. The checklist items are listed below, followed by a small illustrative sketch of how they might be recorded.
  • Intervention design – refers to core components of the intervention and the sequence of intended activities for the intervention group under study, as well as the usual practice activities for the control group.
  • Intervention delivery by providers – refers to what is actually implemented, which can be affected by staff qualifications, quality, use of system functions, adaptations and performance monitoring over time (e.g., the use of electronic preventive care reminders).
  • Intervention uptake by participants – refers to the experience of those receiving the actual intervention that has been implemented, such as the patients who receive electronic preventive care reminders.
  • Contextual factors – refers to characteristics of the setting in which the study occurs such as socio-economic characteristics, culture, geography, legal environment and service structures.
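To make this concrete, the sketch below shows one way a reviewer might record fidelity information against the four components when appraising a published eHealth study. It is a minimal illustration only: the class, field names and example note are assumptions made for this sketch and are not part of the published Oxford Implementation Index.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical structure for recording an Oxford Implementation Index style
# fidelity assessment; the component names follow the chapter, while the
# class, fields and example note are illustrative assumptions.
COMPONENTS = (
    "intervention design",
    "intervention delivery by providers",
    "intervention uptake by participants",
    "contextual factors",
)

@dataclass
class FidelityAssessment:
    study_id: str
    # For each component, free-text notes on what the study reported (or did not).
    notes: Dict[str, List[str]] = field(default_factory=dict)

    def add_note(self, component: str, note: str) -> None:
        if component not in COMPONENTS:
            raise ValueError(f"Unknown component: {component}")
        self.notes.setdefault(component, []).append(note)

    def unreported_components(self) -> List[str]:
        """Components with no implementation detail reported in the study."""
        return [c for c in COMPONENTS if not self.notes.get(c)]

# Example use when appraising a hypothetical published eHealth trial:
assessment = FidelityAssessment(study_id="trial-001")
assessment.add_note("intervention delivery by providers",
                    "Electronic preventive care reminders enabled at all sites")
print(assessment.unreported_components())
```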

May and colleagues (2011) have proposed Normalization Process Theory (NPT) to explain implementation processes for complex interventions in healthcare, which can be extended to eHealth systems. NPT has four theoretical constructs intended to illuminate how a practice becomes embedded through what people actually do and how they work. These constructs are briefly described below (May et al., 2011, p. 2).
  • Coherence – processes that promote or inhibit users’ sense-making of the intervention as a whole. They require investments of meaning made by the participants.
  • Cognitive participation – processes that promote or inhibit users’ enrolment and legitimation of the intervention. They require investments of commitment by the participants.
  • Collective action – processes that promote or inhibit the enactment of the intervention by its users. They require investments of effort made by the participants.
  • Reflexive monitoring – processes that promote or inhibit the comprehension of the effects of the intervention. They require investments in appraisal made by the participants.

To translate NPT into practice, May et al. (2011) created an online survey as a Web-based toolkit to be completed by non-experts. The survey was field tested with 59 participants who responded to the questions and provided feedback to improve the content. The final version of the online survey has 16 statements, and respondents record their extent of agreement with each statement along a sliding bar from “completely agree” to “don’t agree at all”. See the Appendix for the 16 NPT statements and refer to the NPT website to access the toolkit (Normalization Process Theory [NPT], n.d.).
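To illustrate how responses from such a toolkit could be captured and summarized, the sketch below assumes each slider position is recorded as a value from 0 (“don’t agree at all”) to 10 (“completely agree”) and averages responses by construct, with the 16 statements grouped four per construct in the order they appear in the Appendix. Both the numeric scale and the grouping are assumptions made for this illustration rather than features specified in May et al. (2011).

```python
from statistics import mean
from typing import Dict, List

# Assumed grouping of the 16 NPT statements (see Appendix) into the four
# constructs, four statements each in order of appearance; the 0-10 slider
# scale is likewise an assumption made for this sketch.
CONSTRUCTS: Dict[str, List[int]] = {
    "coherence": [1, 2, 3, 4],
    "cognitive participation": [5, 6, 7, 8],
    "collective action": [9, 10, 11, 12],
    "reflexive monitoring": [13, 14, 15, 16],
}

def summarize(responses: Dict[int, float]) -> Dict[str, float]:
    """Average slider positions (0 = don't agree at all, 10 = completely agree)
    for each construct, ignoring unanswered statements."""
    summary = {}
    for construct, statements in CONSTRUCTS.items():
        answered = [responses[s] for s in statements if s in responses]
        summary[construct] = round(mean(answered), 1) if answered else float("nan")
    return summary

# Example: one respondent's slider positions for statements 1-16.
respondent = {s: 7.0 for s in range(1, 17)}
respondent[13] = 3.0  # weak access to information about the intervention's effects
print(summarize(respondent))
```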
Mair and colleagues (2012) have conducted an explanatory systematic review, using NPT as the organizing scheme, to examine factors that promote or inhibit the implementation of eHealth systems. Across the 37 papers included in the review, they found little attention paid to: (a) the work of making sense of eHealth systems in terms of their purpose, establishing their value to users, and planning the implementation; (b) factors that would promote or inhibit stakeholder engagement and participation; (c) the effects on changing roles and responsibilities; (d) risk management; and (e) ways to reconfigure the implementation processes through user-produced knowledge. These findings suggest further work is needed to better understand the wider social framework and the implications to be considered when introducing new technologies such as eHealth systems. NPT may be a new and promising way to unpack the complexities associated with eHealth interventions that are currently not well addressed by traditional evaluation methods.
27.3 Guiding Principles for eHealth Evaluation Methods
There is a growing demand for governments and healthcare organizations to demonstrate the value of eHealth investments in ways that are rigorous and relevant. As such, eHealth evaluation is no longer considered an academic research activity but one that should be integral to the adoption of eHealth systems by healthcare organizations. As eHealth evaluation is increasingly being done by practitioners who may not be experienced in various evaluation approaches, there is an urgent need to ensure these evaluation studies are methodologically robust and reproducible. To underscore this need, Poon, Cusack, and McGowan (2009) identified a set of common evaluation challenges faced by project teams funded by the Agency for Healthcare Research and Quality in the United States to deploy eHealth systems in their organizations. These were mostly non-academic institutions whose project teams had little evaluation experience. The challenges identified included: treating evaluation as an afterthought; unrealistic evaluation scope and inadequate resources; a mismatch between the metrics chosen and the system being implemented; inadequate statistical power; limited data availability; an improper comparison group; insufficient detail on data collection and analysis; and an exclusive focus on quantitative methods.
There have been calls for the establishment of guiding principles to make eHealth evaluation more rigorous, relevant and pragmatic. For instance, Liu and Wyatt (2011) have argued for more randomized controlled trials (RCTs) to properly assess the impact of eHealth systems. Rather than promoting the universal use of RCTs, however, they point to the need for clarity on how to match study methods to evaluation questions. Specifically, an RCT is considered appropriate when significant costs and risks are involved, since such a study can answer questions on whether and by how much an eHealth system improves practitioner performance and patient outcomes. Lilford, Foster, and Pringle (2009) have advocated the use of multiple methods to examine observations at the patient and system levels, as well as the use of formative and summative evaluation approaches performed as needed by internal and external evaluators during different stages of the eHealth system life cycle. Similarly, Catwell and Sheikh (2009) have suggested the need for continuous evaluation of eHealth systems as they are being designed, developed and deployed, guided by the business drivers, vision, goals, objectives, requirements, system designs and solutions.
Greenhalgh and Russell (2010) have offered an alternative set of guiding principles for the evaluation of eHealth systems. Their principles call for a fundamental paradigm shift in thinking beyond the questions of science, beyond the focus on variables, and beyond the notions of independence and objectivity. The argument being made is that eHealth evaluation should be viewed as a form of social practice framed and enacted by engaging participants in a social situation rather than a form of scientific testing for the sole purpose of generating evidence. As such, the evaluation should be focused on the enactments, perspectives, relationships, emotions and conflicts of participants that cannot be reduced to a set of dependent and/or independent variables to explain the situation under study. It also recognizes that evaluation is inherently subjective and value-laden, which is at odds with the traditional scientific paradigm of truth seeking that is purportedly independent and objective. In particular, these authors have compared these alternative paradigms under seven key quality principles described below (Greenhalgh & Russell, 2010, Table 1, p. 3).
  • Hermeneutic circle versus statistical inference – Understanding the situation by iterating between its different parts and the whole that they form, rather than through an adequately powered, statistically representative sample from the population being studied.
  • Contextualization versus multiple interacting variables – Recognizing the importance of context, its interpretive nature and how it emerges from a particular social and historical background rather than reliance on examining the relationships of a predefined set of input, output, mediating and moderating variables.
  • Interaction and immersion versus distance – Focusing on engagement and dialogue between the evaluator and stakeholders and immersing in the socio-organizational context of the system under study rather than maintaining a clear separation for independence and objectivity.
  • Theoretical abstraction and generalization versus statistical abstraction and generalization – Relating observations and interpretations to a coherent and plausible model in order to achieve generalizability, rather than demonstrating validity, reliability and reproducibility among study variables and findings.
  • Reflexivity versus elimination of bias – Understanding how the evaluator’s background, interests and perceptions can affect the questions posed, data collected and interpretations made rather than minimizing bias through rigorous methodological designs.
  • Multiple interpretations versus single reality amenable to scientific measurement – Being open to multiple viewpoints and perspectives from different stakeholders rather than pursuing a single reality generated through robust study designs and methods.
  • Critical questioning versus empiricism – Recognizing that there may be hidden political influences, domination and conflicts that should be questioned and challenged, rather than assuming a direct relationship between reality and the study findings based solely on the precision and accuracy of the measurements made.

From these quality principles we can expect different types of knowledge to be generated based on the underlying paradigms that guide the evaluation effort. For instance, under the traditional scientific paradigm we can expect the evaluation to: (a) employ objective methods to generate quantitative estimates of the relationships between predefined input and output variables; (b) determine the extent to which the system has achieved its original goals and its chain of reasoning; and (c) produce quantitative statistical generalization of the findings with explanatory and predictive knowledge as the end point.
By contrast, an evaluation under an interpretive/critical paradigm would tend to: (a) co-create learning through dialogue among stakeholders to understand their expectations, values and framing of the system; (b) define the meaning of success through the struggles and compromises among stakeholder groups; and (c) provide a contextualized narrative with multiple perspectives on the system and its complexities and ambiguities (Greenhalgh & Russell, 2010, Table 2, p. 3).
27.4 A Strategic View of eHealth Evaluation
Since 2001 the Canadian federal government has invested $2.1 billion in eHealth through incremental and targeted funding allotments. Its provincial and territorial counterparts have also invested in cost-shared eHealth projects that included client and provider registries, interoperable EHRs, primary care EMRs, drug and lab information systems, diagnostic imaging systems, telehealth and consumer health. Despite such major investments, the evidence on eHealth benefits has been mixed to date (Lau, Price, & Bassi, 2014). Mixed findings have been reported in other countries as well. In the United Kingdom, progress toward an EHR for every patient has fallen far short of expectations, and the scope of the national programme for IT has been reduced significantly without any corresponding reduction in cost (National Audit Office [NAO], 2011). In the United States, projected savings from health IT were estimated at $81 billion annually (Hillestad et al., 2005), yet the overall results have been mixed. This may be due to the sluggish adoption of eHealth systems that are neither interoperable nor easy to use, and the failure of healthcare organizations and providers to re-engineer their care processes, including provider payment schemes, in order to reap the full benefits of eHealth systems (Kellermann & Jones, 2013).
To guide eHealth policies, there is a need to expand the scope of eHealth evaluation beyond individual systems toward a more strategic view of where, how and in what ways eHealth fits into the broader healthcare system, in order to demonstrate the overall value of the investments made. Kaplan and Shaw (2004) have suggested that the evaluation of eHealth system success should extend beyond technical functionality to include a mix of social, behavioural and organizational dimensions at a more strategic level, involving specific clinical contexts, cognitive factors, methods of development and dissemination, and how success is defined by different stakeholders. To evaluate these dimensions, Kaplan and Shaw (2004, p. 215) have recommended 10 action items, which have been adapted as follows for this handbook:
  1. Address the concerns of individuals and groups involved in or affected by the eHealth system.
  2. Conduct single and multisite studies with different scopes, types of settings and user groups.
  3. Incorporate evaluation into all phases of an eHealth project.
  4. Study failures, partial successes and changes in project definition or outcome.
  5. Employ evaluation approaches that take into account the shifting nature of healthcare and project environment, including formative evaluations.
  6. Incorporate people, social, organizational, cultural and ethical issues into the evaluation approaches.
  7. Diversify evaluation approaches and continue to develop new approaches.
  8. Conduct investigations at different levels of analysis.
  9. Integrate findings from different eHealth systems, contextual settings, healthcare domains, studies in other disciplines, and work that is not published in traditional research outlets.
  10. Develop and test theory to inform both further evaluation research and informatics practice.

In Canada, Zimlichman et al. (2012) have conducted semi-structured interviews with 29 key Canadian eHealth policy and opinion leaders on their domestic eHealth experiences and lessons learned for other countries to consider. The key findings are for eHealth leaders to emphasize the following: direct provider engagement; a clear business case for stakeholders; guidance on standards; access to resources for mid-course corrections of standards as needed; leveraging the implementation of digital imaging systems; and sponsoring large-scale evaluations to examine eHealth system impact in different contexts.
Similarly, at the 2011 American College of Medical Informatics (ACMI) Winter Symposium, a group of health informatics researchers and practitioners examined the contributions of eHealth to date by leading institutions, as well as possible paths for the nation to follow in using eHealth systems and demonstrating their value in healthcare reform (Payne et al., 2013). In terms of the role of eHealth in reducing costs and improving the quality of healthcare, the ACMI group suggested that eHealth systems can provide detailed information about healthcare, reduce costs in the care of individual patients, and support strategic changes in healthcare delivery.
To address the question of whether eHealth is worth the investment, the ACMI group have suggested the need to refocus the effort on more fundamental but strategic issues of what evidence is needed, what is meant by eHealth, what is meant by investment and how it is measured, and how we determine worth. These questions are briefly discussed below.
  • What evidence is needed? Currently we do not routinely collect the data needed to help us determine the actual costs of eHealth systems and their economic and health impacts, including any unintended consequences. To do so on a continual basis would require structural changes to our healthcare operations and data models.
  • What is meant by eHealth? We need to develop ways to articulate eHealth systems in terms of their functionality and co-factors that affect their design, deployment and use. Examples of co-factors include such areas as policies, process re-engineering, training, organization and resource restructuring, and change management. Also important is the recognition of the therapeutic dosage effect where there can be a differential impact with varying levels of eHealth system investment and adoption.
  • What is meant by investment and how is it measured? We need to clarify who is making the investment, the form of that investment and the scope of the intended impacts. These can vary from the micro level, focused on the burden and benefits for individual providers, to the macro level, with a national scope in terms of societal acceptance of eHealth and its effects. For measurement, there are currently no clear metrics for characterizing the appropriate costs and benefits that should be measured, nor are there standardized methods for measuring them.
  • How do we determine worth? While value is typically expressed in terms of dollars expended, productivity and effectiveness, we do not know what constitutes a realistic return on eHealth investments. This may depend on the initial states with respect to the level of investment made and the extent of eHealth system adoption. For example, with limited eHealth investment a healthcare organization may achieve only limited impact, whereas with a higher level of investment and broader stakeholder support one may achieve significant impact. For meaningful comparison these initial states may need to be normalized across studies and, given the small amount of evidence available to date, the focus should be on how to collect appropriate evidence in the future rather than pursuing a definitive answer on the worth of eHealth systems at this time. A simple illustrative benefit-cost sketch follows this list.
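As a purely illustrative way of expressing worth in dollar terms under different initial states, the toy sketch below computes a discounted benefit-cost ratio for two hypothetical adoption scenarios. Every figure, the five-year horizon and the three percent discount rate are invented for the example and are not drawn from the chapter or from any published evaluation.

```python
# Toy benefit-cost sketch for an eHealth investment. All figures are
# hypothetical and for illustration only.

def benefit_cost_ratio(annual_benefits: float, annual_costs: float,
                       years: int, discount_rate: float = 0.03) -> float:
    """Discounted benefit-cost ratio over a fixed horizon."""
    pv_benefits = sum(annual_benefits / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    pv_costs = sum(annual_costs / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
    return pv_benefits / pv_costs

# Two hypothetical initial states: limited versus broad adoption and support.
limited = benefit_cost_ratio(annual_benefits=200_000, annual_costs=300_000, years=5)
broad = benefit_cost_ratio(annual_benefits=900_000, annual_costs=600_000, years=5)
print(f"limited adoption BCR: {limited:.2f}, broad adoption BCR: {broad:.2f}")
```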
27.5 Concluding Remarks
This chapter examined the future direction of eHealth evaluation in terms of its shifting landscape within the larger healthcare system, including the growing recognition of eHealth as a form of complex intervention, the need for alternate guiding principles on eHealth evaluation methods, and a more strategic view of eHealth evaluation as part of the larger system. This future should be built upon the cumulative knowledge acquired over many years in generating a better understanding of the role, makeup, behaviour and impact of eHealth systems through the application of rigorous methods in pragmatic evaluation studies that are relevant to multiple stakeholder groups. While there is still mixed evidence to date on the performance and impact of eHealth systems, the exemplary case studies provided throughout this handbook should offer some guidance on how leading healthcare organizations have planned, adopted and optimized their eHealth systems in order to reap tangible benefits over time.
In conclusion, the key messages for readers in terms of the future of eHealth evaluation and its implications within the larger healthcare system are summarized below.
  • eHealth evaluation as an evolving science can advance our understanding and knowledge of eHealth as complex sociotechnical interventions within the larger healthcare system. At the same time, eHealth evaluation as a social practice can generate the empirical evidence needed to link the value of eHealth to the investments made from multiple stakeholder perspectives.
  • There is a growing recognition of the need to apply theory-guided, multi-method driven and pragmatic design in eHealth evaluation that is based on best practice principles in order to build on the cumulative knowledge in health informatics.
  • There is some evidence to suggest that, under the right conditions, the adoption of eHealth systems is correlated with clinical and health system benefits. Presently this evidence is stronger in care process improvement than in health outcomes, and the positive economic return is based on only a small set of studies. The question now is not whether eHealth can demonstrate benefits, but under what conditions these benefits can be realized and maximized.

References
Catwell, L., & Sheikh, A. (2009). Evaluating eHealth interventions: The need for continuous systemic evaluation. Public Library of Science Medicine, 6(8), e1000126.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: new guidance. Swindon, UK: Medical Research Council. Retrieved from http://www.mrc.ac.uk/documents/pdf/complex-interventions-guidance/
Eisenstein, E. L., Lobach, D. F., Montgomery, P., Kawamoto, K., & Anstrom, K. J. (2007). Evaluating implementation fidelity in health information technology interventions. In Proceedings of the American Medical Informatics Association (AMIA) Annual Symposium, 2007, Chicago (pp. 211–215). Bethesda, MD: AMIA.
Greenhalgh, T., & Russell, J. (2010). Why do evaluations of eHealth programs fail? An alternative set of guiding principles. Public Library of Science Medicine, 7(11), e1000360. doi: 10.1371/journal.pmed.1000360
Hillestad, R., Bigelow, J., Bower, A., Girosi, F., Meili, R., Scoville, R., & Taylor, R. (2005). Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Affairs, 24(5), 1103–1117.
Kaplan, B., & Shaw, N. T. (2004). Future directions in evaluation research: people, organizational and social issues. Methods of Information in Medicine, 43(3), 215–231.
Kellermann, A. L., & Jones, S. S. (2013). What it will take to achieve the as-yet-unfulfilled promises of health information technology. Health Affairs, 32(1), 63–68.
Lau, F., Price, M., & Bassi, J. (2014). Toward a coordinated electronic record strategy for Canada. In A. S. Carson, J. Dixon, & K. R. Nossal (Eds.), Toward a healthcare strategy for Canadians (pp. 111–134). Montréal: McGill-Queen’s University Press.
Lilford, R. J., Foster, J., & Pringle, M. (2009). Evaluating eHealth: How to make evaluation more methodologically robust. Public Library of Science Medicine, 6(11), e1000186.
Liu, J. L. Y., & Wyatt, J. C. (2011). The case for randomized controlled trials to assess the impact of clinical information systems. Journal of the American Medical Informatics Association, 18(2), 173–180.
Mair, F. S., May, C., O’Donnell, C., Finch, T., Sullivan, F., & Murray, E. (2012). Factors that promote or inhibit the implementation of e-health systems: an explanatory systematic review. Bulletin of the World Health Organization, 90, 257–264. doi: 10.2471/BLT.11.099424
May, C. R., Finch, T., Ballini, L., MacFarlane, A., Mair, F., Murray, E., Treweek, S., & Rapley, T. (2011). Evaluating complex interventions and health technologies using normalization process theory: development of a simplified approach and web-enabled toolkit. BMC Health Services Research, 11, 245. doi: 10.1186/1472-6963-11-245
Montgomery, P., Underhill, K., Gardner, F., Operario, D., & Mayo-Wilson, E. (2013). The Oxford implementation index: a new tool for incorporating implementation data into systematic reviews and meta-analyses. Journal of Clinical Epidemiology, 66(8), 874–882.
National Audit Office. (2011). The national programme for IT in the NHS: an update on the delivery of detailed care records systems. London: Author. Retrieved from https://www.nao.org.uk/report/the-national-programme-for-it-in-the-nhs-an-update-on-the-delivery-of-detailed-care-records-systems/
Normalization Process Theory (NPT). (n.d.). Implementing and evaluating complex interventions. Retrieved from http://www.normalizationprocess.org/
Payne, T. H., Bates, D. W., Berner, E. S., Bernstam, E. V., Covvey, H. D., Frisse, M. E., … Ozbolt, J. (2013). Healthcare information technology and economics. Journal of the American Medical Informatics Association, 20(2), 212–217.
Poon, E. G., Cusack, C. M., & McGowan, J. J. (2009). Evaluating healthcare information technology outside of academia: observations from the National Resource Center for Health Information Technology at the Agency for Healthcare Research and Quality. Journal of the American Medical Informatics Association, 16(5), 631–636.
Shcherbatykh, I., Holbrook, A., Thabane, L., & Dolovich, L. (2008). Methodologic issues in health informatics trials: the complexities of complex interventions. Journal of the American Medical Informatics Association, 15(5), 575–580.
Zimlichman, E., Rozenblum, R., Salzberg, C. A., Jang, Y., Tamblyn, M., Tamblyn, R., & Bates, D. W. (2012). Lessons from the Canadian national health information technology plan for the United States: opinions of key Canadian experts. Journal of the American Medical Informatics Association, 19(3), 453–459.

Appendix
NPT Survey Statements
(Source: May et al., 2011, pp. 8–9)
  1. Participants distinguish the intervention from current ways of working
  2. Participants collectively agree about the purpose of the intervention
  3. Participants individually understand what the intervention requires of them
  4. Participants construct potential value of the intervention for their work
  5. Key individuals drive the intervention forward
  6. Participants agree that the intervention should be part of their work
  7. Participants buy into the intervention
  8. Participants continue to support the intervention
  9. Participants perform the tasks required by the intervention
  10. Participants maintain their trust in each other’s work and expertise through the intervention
  11. The work of the intervention is allocated appropriately to participants
  12. The intervention is adequately supported by its host organization
  13. Participants access information about the effects of the intervention
  14. Participants collectively assess the intervention as worthwhile
  15. Participants individually assess the intervention as worthwhile
  16. Participants modify their work in response to their appraisal of the intervention
