Handbook of eHealth Evaluation: An Evidence-based Approach

Table of Contents
  1. Cover
  2. Half Title Page
  3. Title and Copyright
  4. Contents
  5. List of Tables and Figures
  6. Preface
  7. Acknowledgements
  8. Introduction
    1. What is eHealth? by Francis Lau, Craig Kuziemsky
    2. What is eHealth Evaluation? by Francis Lau, Craig Kuziemsky
    3. What is in this Handbook? by Francis Lau, Craig Kuziemsky
  9. Part I: Conceptual Foundations
    1. 1. Need for Evidence, Frameworks and Guidance by Francis Lau
    2. 2. Benefits Evaluation Framework by Francis Lau, Simon Hagens, Jennifer Zelmer
    3. 3. Clinical Adoption Framework by Francis Lau, Morgan Price
    4. 4. Clinical Adoption Meta-Model by Morgan Price
    5. 5. eHealth Economic Evaluation Framework by Francis Lau
    6. 6. Pragmatic Health Information Technology Evaluation Framework by Jim Warren, Yulong Gu
    7. 7. Holistic eHealth Value Framework by Francis Lau, Morgan Price
  10. Part II: Methodological Details
    1. 8. Methodological Landscape for eHealth Evaluation by Craig Kuziemsky, Francis Lau
    2. 9. Methods for Literature Reviews by Guy Paré, Spyros Kitsiou
    3. 10. Methods for Comparative Studies by Francis Lau, Anne Holbrook
    4. 11. Methods for Descriptive Studies by Yulong Gu, Jim Warren
    5. 12. Methods for Correlational Studies by Francis Lau
    6. 13. Methods for Survey Studies by Francis Lau
    7. 14. Methods for eHealth Economic Evaluation Studies by Francis Lau
    8. 15. Methods for Modelling and Simulation Studies by James G. Anderson, Rong Fu
    9. 16. Methods for Data Quality Studies by Francis Lau
    10. 17. Engaging in eHealth Evaluation Studies by Craig Kuziemsky, Francis Lau
  11. Part III: Selected eHealth Evaluation Studies
    1. 18. Value for Money in eHealth: Meta-synthesis of Current Evidence by Francis Lau
    2. 19. Evaluation of eHealth System Usability and Safety by Morgan Price, Jens Weber, Paule Bellwood, Simon Diemert, Ryan Habibi
    3. 20. Evaluation of eHealth Adoption in Healthcare Organizations by Jim Warren, Yulong Gu
    4. 21. Evaluation of Picture Archiving and Communications Systems by Don MacDonald, Reza Alaghehbandan, Doreen Neville
    5. 22. Evaluation of Provincial Pharmacy Network by Don MacDonald, Khokan C. Sikdar, Jeffrey Dowden, Reza Alaghehbandan, Pizhong Peter Wang, Veeresh Gadag
    6. 23. Evaluation of Electronic Medical Records in Primary Care: A Case Study of Improving Primary Care through Information Technology by Lynne S. Nemeth, Francis Lau
    7. 24. Evaluation of Personal Health Services and Personal Health Records by Morgan Price, Paule Bellwood, Ryan Habibi, Simon Diemert, Jens Weber
    8. 25. Evaluating Telehealth Interventions by Anthony J. Maeder, Laurence S. Wilson
  12. Part IV: Future Directions
    1. 26. Building Capacity in eHealth Evaluation: The Pathway Ahead by Simon Hagens, Jennifer Zelmer, Francis Lau
    2. 27. Future of eHealth Evaluation: A Strategic View by Francis Lau
  13. Glossary
  14. About the Contributors

Chapter 1
Need for Evidence, Frameworks and Guidance
Francis Lau
1.1 Introduction
Over the years, a variety of countries and subnational jurisdictions have made significant investments in eHealth systems with the expectation that their adoption can lead to dramatic improvements in provider performance and health outcomes. With this increasing movement toward eHealth systems, there is a consequent need for empirical evidence to demonstrate that these systems produce tangible benefits. Such evidence is important to establish the return on investment and value, as well as to guide future eHealth investment and adoption decisions.
Thus far the evidence on tangible eHealth benefits has been mixed. In light of these conflicting results, conceptual frameworks are needed as organizing schemes to help make sense of the evidence on eHealth benefits. In particular, it is important to appreciate the underlying assumptions and motivations governing an evaluation and its findings so that future eHealth investment and adoption decisions can be better informed. Along with the need for conceptual frameworks to make sense of the growing eHealth evidence base, there is also an increasing demand to provide best practice guidance in eHealth evaluation approaches to ensure there is both rigour and relevance in the planning, conduct, reporting and appraisal of eHealth evaluation studies.
This chapter describes the challenges associated with eHealth evaluation, and the need for empirical evidence, conceptual frameworks and practice guidance to help us make sense of eHealth evaluation. Six different frameworks that constitute the remaining chapters in Part I of this handbook are then outlined.
1.2 Evaluation Challenges
There are three types of challenges to be considered when navigating the eHealth evaluation landscape. These are the definition of eHealth itself, one’s perspective of eHealth systems, and the approaches used to study eHealth systems. These challenges are elaborated below.
1.2.1 The Challenge of Definition
The field of eHealth is replete with jargon, acronyms and conflicting descriptions that can be incomprehensible to the uninitiated. For instance, eHealth is defined by some countries as the application of Information and Communication Technology (ICT) in health; the term is often seen in the Canadian and European literature. On the other hand, Health Information Technology (HIT) is also used to describe the use of ICT in health, especially in the United States. The terms EHR (Electronic Health Record) and EMR (Electronic Medical Record) can have different meanings depending on the countries in which they are used. In the United States, EHR and EMR are used interchangeably to mean electronic records that store patient data in health organizations. However, in Canada EMR refers specifically to electronic patient records in a physician’s office.
The term EHR can also be ambiguous as to what it contains. According to the Institute of Medicine, an EHR has four core functions: health information and data, order entry (i.e., computerized provider/physician order entry, or CPOE), results management, and decision support (Blumenthal et al., 2006). Sometimes it may also include patient support, electronic communication and reporting, and population health management. Even CPOE can be ambiguous as it may or may not include decision support functions. The challenge with eHealth definitions, then, is that there are often implicit, multiple and conflicting meanings. Thus, when reviewing the evidence on eHealth design, adoption and impacts, one needs to understand what eHealth system or function is involved, how it is defined, and where and how it is used.
1.2.2 The Challenge of Perspective
The type of eHealth system and/or function being evaluated, the health setting involved, and the evaluation focus are important considerations that influence how various stakeholders perceive a system with respect to its purpose, role and value. Knowing the eHealth system and/or function involved – such as a CPOE with clinical decision support (CDS) – is important as it identifies what is being evaluated. Knowing the health setting is important since it embodies the type of care and services, as well as organizational practices, that influence how a system is adopted. Knowing that the focus is, say, to reduce medication errors with CDS is important as it identifies the value proposition being evaluated. Often the challenge with eHealth perspective is that the descriptions of the system, setting and focus are incomplete in the evaluation design and reporting. This lack of detail makes it difficult to determine the significance of the study findings and their relevance to one’s own situation. For example, in studies of CPOE with CDS in the form of automated alerts, it is often unclear how the alerts are generated, to whom they are directed, and whether a response is required. For a setting such as a primary care practice, it is often unclear whether the site is a hospital outpatient department, a community-based clinic or a group practice. Some studies focus on multiple benefit measures, such as provider productivity, care coordination and patient safety, which makes it difficult to decide whether the system has led to an overall benefit. It is often left up to the consumer of evaluation study findings to tease out such detail to determine the importance, relevance and applicability of the evidence reported.
1.2.3 The Challenge of Approach
A plethora of scientific, psychosocial and business approaches have been used to evaluate eHealth systems. Often the philosophical stance of the evaluator influences the approach chosen. On one end of the spectrum there are experimental methods such as the randomized controlled trial (RCT) used to compare two or more groups for quantifiable changes from an eHealth system as the intervention. At the other end are descriptive methods such as case studies used to explore and understand the interactions between an eHealth system and its users. The choice of benefit measures selected, the type of data collected and the analytical techniques used can all affect the study results. In contrast to controlled studies that strive for statistical and clinical significance in the outcome measures, descriptive studies offer explanations of the observed changes as they unfold in the naturalistic setting. In addition, there are economic evaluation methods that examine the relationships between the costs and return of an investment, and simulation methods that model changes based on a set of input parameters and analytical algorithms.
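To make the cost-and-return idea concrete, the following toy sketch computes a present-value net benefit and benefit-cost ratio for a hypothetical eHealth investment. All figures are invented for illustration, and the formulas are standard discounting arithmetic rather than a method prescribed by this handbook.

```python
# Toy economic evaluation of a hypothetical eHealth investment.
# All cost and benefit figures below are invented for illustration.
costs = {"licensing": 250_000, "implementation": 400_000, "training": 150_000}
annual_benefit = 320_000   # assumed yearly savings (e.g., from fewer errors)
years = 5                  # assumed time frame of the evaluation
discount_rate = 0.03       # future benefits are worth less than present ones

total_cost = sum(costs.values())

# Present value of a constant annual benefit stream over the time frame.
pv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                  for t in range(1, years + 1))

net_benefit = pv_benefits - total_cost
bc_ratio = pv_benefits / total_cost

print(f"Total cost:         ${total_cost:,.0f}")
print(f"PV of benefits:     ${pv_benefits:,.0f}")
print(f"Net benefit:        ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {bc_ratio:.2f}")
```

Whether such a simple calculation is appropriate depends on the perspective, options, time frame and outcomes chosen, which is precisely what the eHealth Economic Evaluation Framework in chapter 5 addresses.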
The challenge, then, is that one needs to know the principles behind the different approaches in order to plan, execute and appraise eHealth evaluation studies. Often the quality of these studies varies depending on the rigour of the design and the method applied. Moreover, the use of different outcome measures can make it difficult to aggregate findings across studies. Finally, the timing of studies in relation to implementation and use will influence the impacts observed, which may or may not be realized during the study period due to time lag effects.
1.3 Making Sense of eHealth Evaluation
The growing number of eHealth systems being deployed engenders a growing need for new empirical evidence to demonstrate the value of these systems and to guide future eHealth investment and adoption decisions. Conceptual frameworks are needed to help make sense of the evidence produced from eHealth evaluation studies. Practice guidance is needed to ensure these studies are scientifically rigorous and relevant to practice.
1.3.1 The Need for Evidence
The current state of evidence on eHealth benefits is diverse, complex, mixed and even contradictory at times. The evidence is diverse since eHealth evaluation studies are done on a variety of topics with different perspectives, contexts, purposes, questions, systems, settings, methods and measures. It is complex as the studies often have different foci and vary in their methodological rigour, which can lead to results that are difficult to interpret and generalize to other settings. The evidence is often mixed in that the same type of system can have either similar or different results across studies. There can be multiple results within a study that are simultaneously positive, neutral and negative. Even the reviews that aggregate individual studies can be contradictory for a given type of system in terms of its overall impacts and benefits.
To illustrate, a number of Canadian eHealth evaluation studies have reported notable benefits from the adoption of EMR systems (O’Reilly, Holbrook, Blackhouse, Troyan, & Goeree, 2012) and drug information systems (Fernandes et al., 2011; Deloitte, 2010). Yet in their 2009-2010 performance audit reports, the Auditor General of Canada and six provincial auditors’ offices raised questions on whether there was sufficient value for money from Canadian EHR investments (Office of the Auditor General of Canada [OAG], 2010). Similar mixed findings appear in other countries. In the United Kingdom, progress toward an EHR for every patient has fallen short of expectations, and the scope of the National Programme for IT has been reduced significantly in recent years but without any reduction in cost (National Audit Office [NAO], 2011). In the United States, early 21st century savings from health IT were projected to be $81 billion annually (Hillestad et al., 2005). Yet overall results in the U.S. have been mixed thus far. Kellermann and Jones (2013) surmised the causes to be a combination of sluggish health IT adoption, poor interoperability and usability, and an inability of organizations to re-engineer their care processes to reap the available benefits. Others have argued that the factors leading to tangible eHealth benefits are highly complex, context-specific and not easily transferable among organizations (Payne et al., 2013).
Despite the mixed findings observed to date, there is some evidence to suggest that, under the right conditions, the adoption of eHealth systems is correlated with clinical and health system benefits, with notable improvements in care process, health outcomes and economic return (Lau, Price, & Bassi, 2015). Presently this evidence is stronger in care process improvement than in health outcomes, and the positive economic return is based only on a small set of published studies. Given the current societal trend toward an even greater degree of eHealth adoption and innovation in the foreseeable future, the question is no longer whether eHealth can demonstrate benefits, but under what circumstances eHealth benefits can be realized and how implementation efforts should be applied to address the factors and processes that maximize such benefits.
1.3.2 The Need for Frameworks
In light of the evaluation challenges described earlier, some type of organizing scheme is needed to help make sense of eHealth systems and evaluation findings. Over the years, different conceptual frameworks have been described in the health informatics and information systems literature. For example, Kaplan (2001) advocated the use of such social and behavioural theories as social interactionism to understand the complex interplay of ICT within specific social and organizational contexts. Orlikowski and Iacono (2001) described the nominal, computational, tool, proxy and ensemble views as different conceptualizations of the ICT artefact in the minds of those involved with information systems.
In their review of evaluation frameworks for health information systems, Yusof, Papazafeiropoulou, Paul, and Stergioulas (2008) identified a number of evaluation challenges, examples of evaluation themes, and three types of frameworks that have been reported in eHealth literature. For evaluation challenges, one has to take into account the why, who, when, what and how questions upon undertaking an evaluation study:
  • Why refers to the purpose of the evaluation.
  • Who refers to the stakeholders and perspectives being represented.
  • When refers to the stage in the system adoption life cycle.
  • What refers to the type of system and/or function being evaluated.
  • How refers to the evaluation methods used.
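As a planning aid, these five questions can be recorded in a structured form before a study begins. The sketch below is one hypothetical way to do so; the class name, fields and example values are illustrative assumptions, not part of the Yusof et al. (2008) framework itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationPlan:
    """Records the why, who, when, what and how of a planned study."""
    why: str          # purpose of the evaluation
    who: List[str]    # stakeholders and perspectives represented
    when: str         # stage in the system adoption life cycle
    what: str         # type of system and/or function being evaluated
    how: str          # evaluation methods used

    def is_complete(self) -> bool:
        # All five questions should be answered before the study begins.
        return all([self.why, self.who, self.when, self.what, self.how])

# Hypothetical example: a study of CPOE with clinical decision support.
plan = EvaluationPlan(
    why="Determine whether automated alerts reduce medication errors",
    who=["physicians", "pharmacists", "hospital administrators"],
    when="post-implementation, system in routine use",
    what="CPOE with clinical decision support (automated alerts)",
    how="randomized controlled trial comparing medication error rates",
)
assert plan.is_complete()
```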

For evaluation themes, examples of topics covered include reviews of the impact of clinical decision support systems (CDSS) on physician performance and patient outcomes, the importance of human factors in eHealth system design and implementation, and human and socio-organizational aspects of eHealth adoption. The three types of evaluation frameworks reported were those based on generic factors, system development life cycle, and sociotechnical systems. Examples of generic factors are those related to the eHealth system, its users and the social-functional environment. Examples of system development life cycle are the stages of exploration, validity, functionality and impact. Examples of sociotechnical systems are the work practices of such related network elements as people, organizational processes, tools, machines and documents.
It can be seen that the types of conceptual frameworks reported in the eHealth literature vary considerably in terms of their underlying assumptions, purpose and scope, conceptual dimensions, and the level and choice of measures used. In this context, underlying assumptions are the philosophical stance of the evaluator and his or her worldview (i.e., subjective versus objective). Purpose and scope are the intent of the framework and the health domain that it covers. Conceptual dimensions are the components and relationships that make up the framework. Level and choice of measures are the attributes that are used to describe and quantify the framework dimensions. Later in this chapter, six examples of conceptual frameworks from the eHealth literature are introduced that have been used to describe, understand and explain the technical, human and organizational dimensions of eHealth systems and their sociotechnical consequences. These frameworks are then described in detail in Part I of this handbook.
1.3.3 The Need for Guidance
The term “evidence-based health informatics” first appeared in the 1990s as part of the evidence-based medicine movement. Since that time, different groups have worked to advance the field by incorporating the principle of evidence-based practice into their health informatics teaching and learning. Notable efforts included the working groups of the University for Health Sciences, Medical Informatics and Technology (UMIT), the International Medical Informatics Association (IMIA), and the European Federation for Medical Informatics (EFMI), whose collective output, called the Declaration of Innsbruck, laid the foundation of evidence-based health informatics and eHealth evaluation as a recognized and growing area of study (Rigby et al., 2013).
While much progress has been made thus far, Ammenwerth (2015) detailed a number of challenges that still remain. These include the quality of evaluation studies, publication biases, the reporting quality of evaluation studies, the identification of published evaluation studies, the need for systematic reviews and meta-analyses, training in eHealth evaluation, the translation of evidence into practice and post-market surveillance. From the challenges identified by this author, it is clear that eHealth evaluation practice guidance is needed in multiple areas and at multiple levels. First, guidance on multiple evaluation approaches is needed to examine the planning, design, adoption and impact of the myriad of eHealth systems that are available. Second, guidance is needed to ensure the quality of the evaluation study findings and reporting. Third, guidance is needed to educate and train individuals and organizations in the science and practice of eHealth evaluation.
In this regard, the methodological actions of the UMIT-IMIA-EFMI working groups that followed their Declaration of Innsbruck have been particularly fruitful in moving the field of eHealth evaluation forward (Rigby et al., 2013). These actions include the introduction of guidelines for good eHealth evaluation practice, standards for reporting of eHealth evaluation studies, an inventory of eHealth evaluation studies, good eHealth evaluation curricula and training, systematic reviews and meta-analyses of eHealth evaluation studies, usability guidelines for eHealth applications, and performance indicators for eHealth interventions. Taken together, these outputs are intended to increase the rigour and relevance of eHealth evaluation practice, promote the generation and reporting of empirical evidence on the value of eHealth systems, and build intellectual capacity in eHealth evaluation as a legitimate field of study. In Part II of this handbook, different approaches from the eHealth literature that have been applied to design, conduct, report and appraise eHealth evaluation studies are described.
1.4 The Conceptual Foundations
In Part I of this handbook, the chapters that follow describe six empirical frameworks that have been used to make sense of eHealth systems and their evaluation. These frameworks serve a similar purpose in that they provide an organizing scheme or mental roadmap for eHealth practitioners to conceptualize, describe and predict the factors and processes that influence the design, implementation, use and effect of eHealth systems in a given health setting. At the same time, these frameworks are different from each other in terms of their scope, the factors and processes involved, and their intended usage. The six frameworks covered in chapters 2 through 7 are introduced below.
  • Benefits Evaluation (BE) Framework (Lau, Hagens, & Muttitt, 2007) – This framework describes the success of eHealth system adoption as being dependent on three conceptual dimensions: the quality of the information, technology and support; the degree of its usage and user satisfaction; and the net benefits in terms of care quality, access and productivity. Note that in this framework, organizational and contextual factors are considered out of scope.  
  • Clinical Adoption (CA) Framework (Lau, Price, & Keshavjee, 2011) – This framework extends the BE Framework to include organizational and contextual factors that influence the overall success of eHealth system adoption in a health setting. This framework has three conceptual dimensions made up of micro-, meso- and macro-level factors, respectively. The micro-level factors are the elements described in the BE Framework. The meso-level factors refer to elements related to people, organization and implementation. The macro-level factors refer broadly to elements related to policy, standards, funding and trends in the environment.
  • Clinical Adoption Meta-Model (CAMM) (Price & Lau, 2014) – This framework provides a dynamic process view of eHealth system adoption over time. The framework is made up of four conceptual dimensions of availability, use, behaviour and outcomes. The basic premise is that for successful adoption to occur the eHealth system must first be made available to those who need it. Once available, the system has to be used by the intended users as part of their day-to-day work. The ongoing use of the system should gradually lead to observable behavioural change in how users do their work. Over time, the behavioural change brought on by ongoing use of the system by users should produce the intended change in health outcomes.
  • eHealth Economic Evaluation Framework (Bassi & Lau, 2013) – This framework provides an organizing scheme for the key elements to be considered when planning, conducting, reporting and appraising eHealth economic evaluation studies. These framework elements cover perspective, options, time frame, costs, outcomes and analysis of options. Each element is made up of a number of choices that need to be selected and defined when describing the study.
  • Pragmatic HIT Evaluation Framework (Warren, Pollock, White, & Day, 2011) – This framework builds on the BE Framework and a few others to explain the factors and processes that influence the overall success of eHealth system adoption. The framework is multidimensional and adaptive in nature. The multidimensional aspect ensures the inclusion of multiple viewpoints and measures, especially from those who are impacted by the system. The adaptive aspect allows an iterative design where one can reflect on and adjust the evaluation design and measures as data are being collected and analyzed over time. The framework includes a set of domains, called the criteria pool, made up of a number of distinct factors and processes for consideration when planning an evaluation study. These criteria are work and communication patterns, organizational culture, safety and quality, clinical effectiveness, IT system integrity, usability, vendor factors, project management, participant experience and leadership, and governance.
  • Holistic eHealth Value Framework (Lau, Price, & Bassi, 2015) – This framework builds on the BE, CA and CAMM Frameworks by incorporating their key elements into a higher-level conceptual framework for defining eHealth system success. The framework is made up of the conceptual dimensions of investment, adoption, value and lag time, which interact with each other dynamically over time to produce specific eHealth impacts and benefits. The investment dimension has factors related to direct and indirect investments. The adoption dimension has micro-, meso- and macro-level factors described in the BE and CA Frameworks. The value dimension is conceptualized as a two-dimensional table with productivity, access and care quality in three rows and care process, health outcomes and economic return in three columns. The lag time dimension has adoption lag time and impact lag time, which take into account the time needed for the eHealth system to be implemented, used and to produce the intended effects.
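To illustrate how the last framework’s value dimension is structured, the sketch below represents its three-by-three value table as a nested mapping that an evaluator could populate with reported benefit measures. The code is an illustrative reading of the framework, not an implementation published by its authors, and the example measure is hypothetical.

```python
# Rows and columns of the Holistic eHealth Value Framework's value
# dimension, as described above (Lau, Price, & Bassi, 2015).
VALUE_ROWS = ("productivity", "access", "care quality")
IMPACT_COLUMNS = ("care process", "health outcomes", "economic return")

# An empty value table: each cell can hold the benefit measures that
# evaluation studies report for that row/column pair.
value_table = {row: {col: [] for col in IMPACT_COLUMNS} for row in VALUE_ROWS}

# Hypothetical example: a study reporting fewer medication errors would
# populate the care quality / care process cell.
value_table["care quality"]["care process"].append("medication error rate")

for row in VALUE_ROWS:
    cells = "; ".join(f"{col}: {value_table[row][col]}" for col in IMPACT_COLUMNS)
    print(f"{row:>13} | {cells}")
```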
1.5 Summary
This chapter explained the challenges in eHealth evaluation and the need for empirical evidence, conceptual frameworks and practice guidance to make sense of the field. The six frameworks used in eHealth evaluation that are the topics in the remaining chapters of Part I of this handbook were then introduced.
References
Ammenwerth, E. (2015). Evidence-based health informatics: How do we know what we know? Methods of Information in Medicine, 54(4), 298–307.
Bassi, J., & Lau, F. (2013). Measuring value for money: A scoping review on economic evaluation of health information systems. Journal of the American Medical Informatics Association, 20(4), 792–801.
Blumenthal, D., DesRoches, C., Donelan, K., Ferris, T., Jha, A., Kaushal, R., … Shield, A. (2006). Health information technology in the United States: the information base for progress. Princeton, NJ: Robert Wood Johnson Foundation.
Deloitte. (2010). National impacts of generation 2 drug information systems. Technical Report, September 2010. Toronto: Canada Health Infoway. Retrieved from https://www.infoway-inforoute.ca/index.php/en/component/edocman/resources/reports/331-national-impact-of-generation-2-drug-information-systems-technical-report
Fernandes, O. A., Lee, A. W., Wong, G., Harrison, J., Wong, M., & Colquhoun, M. (2011). What is the impact of a centralized provincial drug profile viewer on the quality and efficiency of patient admission medication reconciliation? A randomized controlled trial. Canadian Journal of Hospital Pharmacy, 64(1), 85.
Hillestad, R., Bigelow, J., Bower, A., Girosi, F., Meili, R., Scoville, R., & Taylor, R. (2005). Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Affairs, 24(5), 1103–⁠1117.
Kaplan, B. (2001). Evaluating informatics applications — some alternative approaches: theory, social interactionism, and call for methodological pluralism. International Journal of Medical Informatics, 64(1), 39–58.
Kellermann, A. L., & Jones, S. S. (2013). What it will take to achieve the as-yet-unfulfilled promises of health information technology. Health Affairs, 32(1), 63–68.
Lau, F., Hagens, S., & Muttitt, S. (2007). A proposed benefits evaluation framework for health information systems in Canada. Healthcare Quarterly, 10(1), 112–118.
Lau, F., Price, M., & Keshavjee, K. (2011). From benefits evaluation to clinical adoption: Making sense of health information system success in Canada. Healthcare Quarterly, 14(1), 39–45.
Lau, F., Price, M., & Bassi, J. (2015). Toward a coordinated electronic health record (EHR) strategy for Canada. In A. S. Carson, J. Dixon, & K. R. Nossal (Eds.), Toward a healthcare strategy for Canadians (pp. 111–134). Kingston, ON: McGill-Queen’s University Press.
National Audit Office. (2011). The national programme for IT in the NHS: an update on the delivery of detailed care records systems. London: Author. Retrieved from https://www.nao.org.uk/report/the-national-programme-for-it-in-the-nhs-an-update-on-the-delivery-of-detailed-care-records-systems/
Office of the Auditor General of Canada [OAG]. (2010, April). Electronic health records in Canada – An overview of federal and provincial audit reports. Ottawa: Author. Retrieved from http://www.oag-bvg.gc.ca/internet/docs/parl_oag_201004_07_e.pdf
O’Reilly, D., Holbrook, A., Blackhouse, G., Troyan, S., & Goeree, R. (2012). Cost-effectiveness of a shared computerized decision support system for diabetes linked to electronic medical records. Journal of the American Medical Informatics Association, 19(3), 341–345.
Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the “IT” in IT research – A call to theorizing the IT artefact. Information Systems Research, 12(2), 121–134.
Payne, T. H., Bates, D. W., Berner, E. S., Bernstam, E. V., Covvey, H. D., Frisse, M. E., … Ozbolt, J. (2013). Healthcare information technology and economics. Journal of the American Medical Informatics Association, 20(2), 212–217.
Price, M., & Lau, F. (2014). The clinical adoption meta-model: a temporal meta-model describing the clinical adoption of health information systems. BMC Medical Informatics and Decision Making, 14, 43. Retrieved from http://www.biomedcentral.com/1472-6947/14/43
Rigby, M., Ammenwerth, E., Beuscart-Zéphir, M.-C., Brender, J., Hyppönen, H., Melia, S., Nykänen, P., Talmon, J., & de Keizer, N. (2013). Evidence-based health informatics: 10 years of efforts to promote the principle. IMIA Yearbook of Medical Informatics, 2013, 34–46.
Warren, J., Pollock, M., White, S., & Day, K. (2011). Health IT evaluation framework. Wellington, NZ: Ministry of Health.
Yusof, M. M., Papazafeiropoulou, A., Paul, R. J., & Stergioulas, L. K. (2008). Investigating evaluation frameworks for health information systems. International Journal of Medical Informatics, 77(6), 377–385.
