Chapter 26
Building Capacity in eHealth Evaluation: The Pathway Ahead
Simon Hagens, Jennifer Zelmer, Francis Lau
26.1 Introduction
While much progress has been made in recent years, accommodating the growing
demand for evidence relating to eHealth will require a continued focus on
capacity building. Similar interest in evaluation capacity building exists in other disciplines and, as a consequence, has been a focus of discussion in the literature.
Labin, Duffy, Meyers, Wandersman, and Lesesne (2012) define evaluation capacity
building as an “intentional process to increase individual motivation, knowledge and skills, and
to enhance a group or organization’s ability to conduct or use evaluation” (p. 308). For the purpose of this discussion, the focus will be on the broader
health system’s ability to conduct or use evaluation related to digital solutions. Preskill
and Boyle (2008) describe the goal of evaluation capacity building as being “where members continuously ask questions that matter, collect, analyze, and
interpret data, and use evaluation findings for decision-making and action” (p. 448). They go on to describe essential inputs including leadership support,
incentives, resources and opportunities to transfer learning (Preskill & Boyle, 2008). This is consistent with the themes emerging from the capacity
building experience in eHealth.
26.2 Motivation for Benefits Evaluation and Benefits Realization
Evaluation is a core component of an overall approach to benefits realization
(Hagens, 2009). Clear and specific articulation of the benefits being targeted
is an important starting point. With this step, expectations can be set and the
mobilization of required participants can begin. A next step is identification
of key assumptions or conditions necessary for benefits to materialize, and
action required to address them. These actions may be many and varied. Examples
include decision support, user interface considerations, workflow or other
process redesign, policy or practice change, or approaches to harvesting
quality or productivity gains. A structured change management methodology can
help ensure success. As part of this process, ongoing measurement against objectives provides opportunities to adapt and adjust based on the findings, thereby improving results. Information to manage course
corrections and subsequent steps is always required. Stakeholders and funders
will also want to know the value produced and have other accountability
considerations addressed.
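To make this cycle concrete, the following minimal sketch (in Python) illustrates how measurement against clearly articulated benefits might be tracked to support ongoing course correction. All benefit names, indicators, baselines, and targets are hypothetical examples, not values drawn from any framework cited in this chapter.

```python
# Illustrative sketch only: a minimal structure for tracking measurement
# against targeted benefits in a benefits realization cycle. All benefit
# names, indicators, baselines, and targets below are hypothetical.
from dataclasses import dataclass

@dataclass
class Benefit:
    name: str        # the targeted benefit, articulated clearly and specifically
    indicator: str   # the measurable indicator tied to that benefit
    baseline: float  # value before implementation
    target: float    # value the initiative aims to reach
    current: float   # latest measured value

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return (self.current - self.baseline) / gap if gap else 1.0

# Hypothetical targeted benefits for a digital health initiative
benefits = [
    Benefit("Fewer duplicate lab tests", "duplicate test rate (%)", 12.0, 6.0, 9.5),
    Benefit("Faster results delivery", "mean turnaround time (h)", 48.0, 24.0, 30.0),
]

# Ongoing measurement supports course corrections as the initiative unfolds.
for b in benefits:
    status = "on track" if b.progress() >= 0.5 else "adapt and adjust"
    print(f"{b.name} [{b.indicator}]: {b.progress():.0%} of gap closed; {status}")
```

In practice, indicators would be drawn from established sets such as those discussed in section 26.3, and the measurements would feed a structured change management process rather than a simple report.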
The most effective evaluations are managed with the end in mind, informed by the
stakeholders who have the ability to apply the findings. Ideally, evaluation
work will meet the needs of multiple stakeholders, and thus consider multiple
perspectives. As discussed, funders and decision-makers are an important
audience. Clinicians and other staff in clinical settings will also be
interested. Evaluation can inform clinicians of progress achieved and help them
get the most out of investments. Evidence to inform optimization of benefits is
also critical for implementers and vendors. Academia supports knowledge
translation through teaching and encourages rigour and quality of methods and
analysis. The varied interests of stakeholder groups require consideration in
the design and execution of evaluation. Meeting the needs of all stakeholders
may require trade-offs. For example, formative or process evaluation can
heavily inform adoption and optimization. Summative or outcome evaluation is
required to effectively assess value.
As a result, many stakeholders need skills and assets to contribute to the evaluation process, but not all have equal capacity to do so. The ability to design, execute, and respond to evaluations is a top need.
Academia contributes substantially to addressing capacity needs and can be
effective at publishing and communicating findings. Clinicians, health sector
leaders, implementers, the vendor community, internal and external evaluators,
and training providers can also play important roles. With growing needs for
these skill sets, there is an opportunity for greater participation by all.
Effective evaluation also requires the focused engagement of those involved in
digital health initiatives, from users to implementation teams to leadership. For instance, time and support are needed to co-design evaluation frameworks,
gain approvals, contribute insights, facilitate data collection, provide other
input, and respond to evaluation findings.
26.3 The Foundation
Encouraging and supporting capacity development is best built upon a foundation
of tried and tested frameworks, tools and processes. There are a number of
cross-sector structured approaches that have been applied to support benefits
realization for digital health, such as Val IT (see http://www.isaca.org/knowledge-center/val-it-it-value-delivery-/pages/val-it1.aspx), Prosci (see https://www.prosci.com/) and value chains.
There are also a number of tools that have been tailored to the health sector’s needs. Some of the important contributors to this body of knowledge and resources
are discussed in greater depth in chapter 1 of this handbook.
In Canada, digital health-related resources have been developed by a variety of
individuals and organizations. For instance, Canada Health Infoway’s Benefits Evaluation Framework (discussed in detail in chapter 2) provides a
high-level, coherent, evidence-based model to guide discussion of benefits and
evaluation approaches (Lau, Hagens, & Muttitt, 2007). This framework, along with sets of indicators that focus on various types of digital health, has been regularly used to support measurement as part of a benefits realization cycle. The broader Clinical
Adoption Framework is also a useful reference to consider the range of inputs
influencing success (Lau, Price, & Keshavjee, 2011). Likewise, the Newfoundland and Labrador Centre for Health
Information (NLCHI, n.d.) produced a range of materials including an evaluation framework and a
series of successful evaluations as examples. Faculty at the University of Victoria’s School of Health Information Science have also been productive, developing a series of eHealth evaluation frameworks and tools through the jointly funded CIHR/Infoway eHealth Observatory (Lau, n.d.). In addition, a number of open and
proprietary evaluation tools and frameworks are offered by solution vendors,
consulting firms, and think tanks.
Internationally, there have also been many contributions. A notable health IT evaluation framework and toolkit was produced by the U.S. Agency for Healthcare Research and Quality (AHRQ, n.d.) to support its demonstration projects (Cusack & Poon, 2007). It was informed by groundbreaking U.S. research that began emerging decades ago. Another important
contribution comes from the Organisation for Economic Co-operation and Development (OECD), which has been developing benchmark measures to allow comparison and
knowledge sharing (OECD, 2013). They cover four major domains: provider-centric electronic records,
patient-centric electronic records and services, health information exchange,
and telehealth. While there are challenges with differing terminology and
approaches to eHealth across countries, the OECD effort is proving important for supporting cross-national benchmarking and
efforts by countries to enhance digital health measurement (Adler-Milstein,
Ronchi, Cohen, Winn, & Jha, 2014).
The work of organizations such as those discussed above has also produced practical tools to assist with conducting evaluations. The System & Use Assessment survey developed by Canada Health Infoway and its partners is
one such example. It has been extensively applied across Canada over the last decade (Infoway, 2006,
2012). There are many other similar examples of well-tested tools to make
collection and interpretation of data easier for organizations building
capacity.
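As a simple illustration of what such standardized instruments enable, the sketch below (in Python) summarizes invented Likert-style responses of the kind a system-and-use style survey might collect; the actual items, scales, and scoring of the System & Use Assessment survey are defined in Infoway’s technical reports, not here.

```python
# Illustrative only: a routine summary of Likert-style survey responses, the
# kind of analysis standardized instruments make straightforward. The items
# and responses below are invented; the real System & Use Assessment survey
# is defined in Infoway's benefits evaluation technical reports.
from statistics import mean

# Invented responses on a 1-5 scale (1 = strongly disagree, 5 = strongly agree)
responses = {
    "The system is easy to use": [4, 5, 3, 4, 4],
    "The system improves the quality of my work": [3, 4, 4, 2, 5],
}

for item, scores in responses.items():
    agree = sum(s >= 4 for s in scores) / len(scores)  # share answering 4 or 5
    print(f"{item}: mean {mean(scores):.1f}/5; {agree:.0%} agree")
```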
Virtual communities have been important for sharing all of these resources, as
well as the experiences of those involved. They also provide a forum to build
fruitful relationships, such as connecting experienced evaluators with those in
need of support.
While these resources provide a helpful starting point for those embarking on
eHealth evaluation, there is an ongoing need for development and evolution. The
rapid change of technology and its application in healthcare requires evaluation methodologies and tools that keep pace. Similarly, the
growing demand for evidence to inform decision-making requires evaluation
approaches aligned to evolving priorities and questions being posed. The
increasing digitization of health has generated new data sources with
substantial potential to improve evaluation options, as well as broadly inform
the health system. Seizing this opportunity, however, takes careful planning
and cooperation.
26.4 Approaches for Building Capacity
Just as contributions to the knowledge base have come from many different stakeholder groups, evaluation capacity building has come from across the sector, often by leveraging evaluation expertise and capacity developed in other domains.
Basic undergraduate education through universities and colleges, a traditional
approach for capacity building, has been impactful. While the University of
Victoria offered the first health informatics program in Canada, there are now
more than ten programs that train undergraduate students in the fundamentals of eHealth and its implications for the health system, and several that produce experts through
their graduate programs. Fewer have courses specifically dedicated to
evaluation.
More broadly, Canada’s faculties of medicine, nursing, and pharmacy have undertaken specific
initiatives to focus on how to better prepare students to practice in modern,
technology-enabled clinical environments (Baker, Charlebois, Lopatka, Moineau, & Zelmer, 2016). The specific goals of this Infoway-supported program were to:
- Ensure that clinicians-in-training are ready to practice in, and gain value from, an ICT-enabled environment when they graduate; and
- Integrate concepts and expectations related to the use of ICT in practice into curricula design and educational processes.
In a number of cases, these efforts include embedding competencies related to
evaluation in health professional undergraduate education. Continuing education
through academia and other education providers is also essential, as many
professionals seek core skills to embark on evaluation work or to enrich their
knowledge in key areas.
Recognizing the importance of capacity building and the critical role of
academia, the Canadian Institutes of Health Research (CIHR) and Infoway partnered in 2008 to offer a five-year CIHR-Infoway Chair in eHealth. This award, won by Francis Lau of the University of
Victoria, proved a successful example of targeted funding making a significant
impact, with outputs including some of the frameworks, publications, and
communities referenced earlier (Lau, 2014).
Practical experience in undertaking and addressing the findings of evaluations
is also important for building capacity, particularly given that the volume of
evaluation activity has increased in recent years. This growth parallels the
rise in evaluation in the public sector as a whole. Many investments in eHealth
today have an explicit requirement for measurement, be it around the implementation process, the change effort, the adoption and/or the
impacts. This was not previously the norm, but more sophisticated approaches to
project delivery and an increasing demand for evidence-informed decisions have changed expectations. With greater funding and attention from leadership,
implementers have sought out evaluators from academia and the private sector,
and often take the opportunity to grow in-house capabilities. Arguably, the
most effective work comes from collaboration between these groups, matching
those in a position to shape evaluations and generate knowledge with those who
are in a position to apply the findings.
Growth in the volume of evaluation activity has required investments of
financial, human, and other resources. Granting agencies have an important role
in this area. CIHR has made some very important contributions over the past decade, with eHealth
an explicit focus of a number of grant competitions and knowledge translation
activities. Embedding evaluation as part of project plans and budgets is also
increasingly common. Organizations delivering eHealth solutions are now more
likely to require evaluation as a deliverable, and are able to budget for it and engage skilled internal or external evaluators to support the work.
26.5 Approaches for Knowledge Translation and Benefits Realization
As important as increasing the capacity for conducting evaluations is increasing
the application of findings. The Canada Health Infoway Benefits Evaluation
Framework focuses on three purposes: accountability, informing clinicians and
other digital health users, and driving benefits realization.
Accountability for investments made is increasingly important in the public
sector and has been an important driver of expanded evaluation and performance
management practices in Canada. Methodologies and reporting approaches must be
tailored for this purpose. Clinicians, steeped in a culture of
evidence-informed practice, similarly expect evidence to shape digital health
design, implementation, and adoption, as well as its effective integration into
clinical practice.
This includes supporting evidence-informed strategic planning and
implementation. All stakeholders involved in implementation can benefit from
evidence to inform optimization and realization of benefits. For instance,
initial strategic planning typically includes a review of the evidence and
critical success factors to inform priorities, assess options, and guide plans.
Subsequently, evidence may help to drive enabling functionality such as decision support, guide workflow redesign to capture potential productivity improvements, address barriers to adoption such as user interface challenges or inconsistent policies, or harness data for secondary use. While any of these factors may
be identified during project planning, often the full value proposition emerges
over time, with thoughtful observation, analysis, and ongoing response to
feedback from users.
Traditional approaches to knowledge translation (KT), such as publications and conferences, remain central to the long-term
objective of building a rich and robust knowledge base. Both enable communication to a range of audiences, and conferences add opportunities to build collaboration from that communication. Peer-reviewed literature helps to set quality standards that allow those using the results to apply them appropriately and confidently. Limitations of peer-reviewed publication include
delays (often in excess of a year), the effort required to complete the
process, and disincentives for many outside the academic community to
contribute findings.
In addition, KT approaches have been evolving rapidly, both to get evidence into the hands of decision-makers more quickly and to encourage broader participation. Within specific
projects, rapid cycle improvement methods can help to get actionable
information into the hands of those with the ability to adapt plans and
processes. Ideally, projects are designed with an optimization period. This
ensures that resources are available to make adjustments as the process
unfolds. Often quality improvement cycles are built into broader change
management methodologies. The National Change Management Framework and
supporting toolkit, developed by the Pan-Canadian Change Management Network
with the support of Canada Health Infoway, positions evaluation as a central
activity and provides some of the practical guidance required to enable
long-term success (Infoway, 2013).
An expanding range of approaches beyond peer-reviewed journals are also being
used to share knowledge across organizational boundaries. For instance,
webinars, often tied to the kinds of communities described above, are
increasingly prevalent and valuable. There are also well-regarded print/online
journals and magazines, and growing online and social media options. Each of
these has unique pros and cons, with considerations such as reducing disincentives to sharing experiences, streamlining process requirements and prerequisites, simplifying access to information, and ensuring that the
quality of information can be assessed by users. Integrated KT and multi-channel communications are important considerations.
26.6 Capacity Building Examples
This section provides selected examples of the capacity building outputs that
are mentioned in the Foundation section (26.3) of this chapter. The examples
cover peer support communities, knowledge and learning resources, and formal
evaluation courses.
26.6.1 Peer Support Communities
Canada Health Infoway initiated a Benefits Evaluation community in 2007, as work
was underway to put the evaluation strategy into operation. Early roadblocks had emerged in gaining buy-in from project teams to take accountability for evaluation and in ensuring that there were people with the right skills to be successful. The community directly addressed these roadblocks, bringing
stakeholder groups together and showcasing practical methodologies, effective
partnerships, and the value of having evidence. Much credit for the early
success of this community goes to the staff of the Newfoundland and Labrador
Centre for Health Information, who brought substantial expertise to this forum
and demonstrated the collaborative relationship they had achieved between
implementers and evaluators (NLCHI, n.d.). Today, there is strong participation from many groups across Canada, and the community has contributed substantially to the development of a series of indicator sets, which are included in Infoway’s benefits evaluation technical report (Infoway, n.d.). The community has evolved over the years to focus on emerging areas of need and to engage a
broader audience. In addition, Canada Health Infoway frequently brings
evaluation expertise into other Infoway-facilitated communities, like
jurisdictional implementers groups, clinician reference groups or InfoCentral
communities.
A further example is the virtual eHealth Benefits Evaluation Knowledge
Translation (BE-KT) community, which evolved from the University of Victoria’s (UVic) eHealth Observatory (Lau, n.d.). In 2012-13, researchers at the eHealth
Observatory facilitated a virtual learning community in eHealth evaluation with
a broad membership including implementers, policy-makers and academia. This
community featured live online sessions with presentations from mentors,
follow-up questions to prompt online discussions, and resources and links to
support members in their evaluation activities (Bassi, Lau, Hagens, Leaver, & Price, 2013). The community attracted over 130 participants, many from outside
academia, who were seeking the knowledge and network to increase the use of
evaluation in their organizations. Over an 18-month period, the BE-KT community website received 4,425 visits and 14,683 page views from both registered and unregistered members. Additionally, during that period, 28 live seminar sessions were held on different topics related to eHealth evaluation.
The presenters included researchers from the eHealth Observatory, Infoway
benefits realization staff and jurisdictional representatives. The overall
feedback from community members was largely positive, in that the effort had
raised awareness of the importance of BE, where to find BE resources, and how to apply BE frameworks, methods and tools. Interested readers can refer to the final report
and lessons learned from the eHealth Observatory website (Bassi, 2014).
26.6.2 Knowledge and Learning Resources
Over the years, a growing number of online knowledge and learning resources on
eHealth evaluation have been published. Examples of the organizations and
groups that provide publicly available eHealth evaluation resources over the
Internet are listed below.
- Canada Health Infoway maintains a rich repository of knowledge resources in benefits evaluation on its website (Infoway, n.d.). These resources include the Infoway BE Framework, the BE technical indicator report, and published jurisdictional BE reports in its online resource centre.
- The Newfoundland and Labrador Centre for Health Information has published the outputs of its benefits evaluation work done over the years on its website (NLCHI, n.d.). These resources include an inventory of published electronic health record (EHR) initiatives across Canada, a review of published EHR evaluation literature and reports, and a proposed evaluation framework for EHR initiatives. In particular, the proposed framework describes a collaborative process of working with stakeholders to develop meaningful and relevant evaluation study designs and measures that can be implemented by healthcare organizations.
- The University of Victoria eHealth Observatory is part of a five-year chair in eHealth award jointly funded by CIHR and Infoway to examine the effects of health information systems deployment in Canada. The website contains a set of eHealth evaluation frameworks, rapid evaluation methods and sample evaluation tools that can be applied and/or adapted in field evaluation studies of different eHealth systems (Lau, n.d.).
- The Agency for Healthcare Research and Quality was funded as part of the national strategy in the United States to improve the quality of care through IT. Over the years, the AHRQ Health IT website has amassed a rich set of resources that include health IT evaluation toolkits, AHRQ-funded health IT projects, published health IT evaluation studies and position papers in health IT adoption and evaluation (AHRQ, n.d.).
- Members of the European Federation of Medical Informatics (EFMI) working group on Evaluation (EVAL) and the International Medical Informatics Association (IMIA) working group on Technology Assessment and Quality Improvement have published a set of guidelines for the reporting of evaluation studies in health informatics called STARE-HI (Talmon et al., 2009; Brender et al., 2013) and for good evaluation practice in health informatics called GEP-HI (Nykänen et al., 2011). These guidelines are invaluable resources that provide guidance on how one should design, conduct and report high-quality eHealth evaluation studies in the field setting.
- The Organisation for Economic Co-operation and Development (OECD, 2013) offers model surveys and other benchmarking tools related to health information and communications technologies.
- Institute for Health Information Studies, UMIT — Researchers at the University for Health Sciences, Medical Informatics and Technology (UMIT) have published an online inventory of evaluation studies in medical informatics called the Web-based evaluation database or EvalDB (see Ammenwerth & de Keizer, 2005). This database contains over 1,800 published health IT evaluation studies and systematic reviews, and is updated on an ongoing basis. It is one of the most comprehensive inventories on eHealth evaluation studies published to date.
- The National Institutes of Health Informatics (NIHI) provides a suite of online education sessions, including a series on evaluation with sections on qualitative and quantitative methods, which can be accessed at www.nihi.ca.
26.6.3 Formal Evaluation Courses
The School of Health Information Science at the University of Victoria has been
offering a graduate level course on eHealth evaluation since 2010 as part of
its MSc program in health informatics. This course is delivered as a five-day
intensive on-campus workshop with two weeks of online follow-up through
Web-conference sessions. The course goals are to help students: (a) understand
the types of evaluation frameworks, methods and studies available; (b) become
knowledgeable in how evaluation studies are designed, conducted and reported;
and (c) apply evaluation findings to inform healthcare policy and practice. The
workshop is made up of class lectures and discussions, case studies, guest
speakers, and individual and group assignments. The assignments provide
students with opportunities to appraise published eHealth evaluation studies,
and to apply best eHealth evaluation practice guidelines in eHealth case
examples while designing an eHealth field evaluation study. The course covers
(but is not limited to) the following topics:
- Methods of appraising and reporting eHealth evaluation studies (e.g., assessment of methodological quality, best practices in eHealth evaluation);
- eHealth evaluation frameworks (e.g., Infoway Benefits Evaluation Framework, Clinical Adoption Framework);
- eHealth evaluation study design and methods (e.g., quantitative versus qualitative, mixed methods, experimental, observational studies, surveys, usability studies); and
- examples of published eHealth evaluation studies (e.g., reviews, controlled and descriptive studies).
There are other Canadian universities that offer health-related evaluation
courses as part of their graduate programs in eHealth. For example, students in
the MSc eHealth program at McMaster University can enrol in such elective
courses as Health Economics and Evaluation (C711), Fundamentals of Health
Research & Evaluation Methods (HRM721), Economic Analysis for the Evaluation of Health Services (HRM737), and Approaches to the Evaluation of Health Services (HRM762). Students in the MSc in Health Informatics program at the University of Waterloo can enrol in the Evaluation of Public Health Program (PHS614) course as an elective. There is also an MSc program in Health Evaluation at
the University of Waterloo with its entire curriculum focused on program
evaluation in public health and health systems. Note that the courses mentioned
at these universities are not necessarily specific to eHealth.
26.7 Looking Ahead
Some important opportunities emerge through exploring capacity building for
evaluation. Partnerships between academia and other stakeholders such as implementation teams, clinical users, and funders have proven so mutually beneficial as to warrant expansion. There is value in continuing to build,
maintain, and share the pool of such resources as data collection tools and
sample methodologies. Diversification of training opportunities from degrees to
courses, workshops and online offerings, has been important for expanding the
pool of evaluators. Integrating evaluation and optimization into the project
life cycle has likewise proven valuable. Sharing and acting on the results of
evaluation, both locally and more broadly, is also important, just as
evidence-informed care has become the standard for clinical practice. Much
progress has been made, but many opportunities remain to continue to build
capacity in this domain.
References
Agency for Healthcare Research and Quality (AHRQ). (n.d.). Health information technology. Rockville, MD: Author. Retrieved January 20, 2016, from https://healthit.ahrq.gov/
Ammenwerth, E., & de Keizer, N. (2005). Inventory of evaluation studies of information technology in health care: Trends
in evaluation research, 1982 to 2002. Tyrol, Austria: UMIT. Retrieved from https://evaldb.umit.at/
Baker, C., Charlebois, M., Lopatka, H., Moineau, G., & Zelmer, J. (2016). Influencing change: Preparing the next generation of
clinicians to practice in the digital age. Healthcare Quarterly, 18(4), 5–7.
Bassi, J. (2014). Increasing capacity in eHealth benefits evaluation: eHealth benefits knowledge
translation community. Final report and lessons learned. Toronto: Canada Health Infoway. Retrieved from http://ehealth.uvic.ca/community/2014.04.09-KT%20Community%20Full%20Report-v1.0.pdf
Bassi, J., Lau, F., Hagens, S., Leaver, C., & Price, M. (2013). Knowledge translation in eHealth: Building a virtual
community. Studies in Health Technology and Informatics, 183, 257–262.
Brender, J., Talmon, J., de Keizer, N., Nykänen, P., Rigby, M., & Ammenwerth, E. (2013). STARE-HI: Statement on reporting of evaluation studies in health informatics,
explanation and elaboration. Applied Clinical Informatics, 4(3), 331–358.
Cusack, C. M., & Poon, E. G. (2007). Health information technology evaluation toolkit (AHRQ Publication No. 08-0026-EF). Rockville, MD: Agency for Healthcare Research and Quality.
Hagens, S. (2009). Canadian EHR: Early benefits and the journey ahead. Healthcare Information Management & Communications Canada, 23(4), 24–26.
Infoway. (2006). Benefits evaluation survey process — System & use assessment survey. Canada Health Infoway benefits evaluation indicators technical report. Version
1.0, September 2006. Toronto: Author. Retrieved from
https://www.infoway-inforoute.ca/en/component/edocman/resources/toolkits/change-management/national-framework/monitoring-and-evaluation/resources-and-tools/991-benefits-evaluation-survey-process-system-use-assessment-survey
Infoway. (2012). Benefits evaluation survey process — System & use assessment survey. Canada Health Infoway benefits evaluation indicators technical report. Version
2.0, April 2012. Toronto: Author.
Infoway. (2013). A framework and toolkit for managing eHealth change. Toronto: Author. Retrieved from https://www.infoway-inforoute.ca/en/component/edocman/resources/toolkits/change-management/methodologies-and-approaches/1659-a-framework-and-toolkit-for-managing-ehealth-change-2
Infoway. (n.d.). Benefits evaluation. Toronto: Author. Retrieved from
https://www.infoway-inforoute.ca/en/solutions/benefits-evaluation
Labin, S. N., Duffy, J., Meyers, D., Wandersman, A., & Lesesne, C. (2012). A research synthesis of the evaluation capacity building
literature. American Journal of Evaluation, 33(3), 307–338. doi: 10.1177/1098214011434608
Lau, F. (2014, May 5). Value of eHealth evaluation research in supporting
healthcare reform in Canada [Web log post to Infoway Connects]. Ottawa: Canada Health Infoway. Retrieved from
http://infowayconnects.infoway-inforoute.ca/2014/05/05/value-of-ehealth-evaluation-research-in-supporting-healthcare-reform-in-canada
Lau, F. (n.d.). University of Victoria (UVic) eHealth observatory. Victoria, BC: University of Victoria. Retrieved from http://ehealth.uvic.ca/index.php
Lau, F., Hagens, S., & Muttitt, S. (2007). A proposed benefits evaluation framework for health
information systems in Canada. Healthcare Quarterly, 10(1), 112–118.
Lau, F., Price, M., & Keshavjee, K. (2011). From benefits evaluation to clinical adoption — Making sense of health information system success. Healthcare Quarterly, 14(1), 39–45.
Newfoundland and Labrador Centre for Health Information (NLCHI). (n.d.). Health analytics: Benefits evaluation. St. John’s: Author. Retrieved from http://nlchi.nl.ca/index.php/quality-information/health-analytics/benefits-evaluations
Nykänen, P., Brender, J., Talmon, J., de Keizer, N., Rigby, M., Beuscart-Zephir, M.
C., & Ammenwerth, E. (2011). Guideline for good evaluation practice in health
informatics (GEP-HI). International Journal of Medical Informatics, 80(12), 815–827.
Organisation for Economic Co-operation and Development (OECD). (2013). OECD guide to measuring ICTs in the health sector. Paris: Author.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459. doi: 10.1177/1098214008324182
Talmon, J., Ammenwerth, E., Brender, J., de Keizer, N., Nykänen, P., & Rigby, M. (2009). STARE-HI: Statement on reporting of evaluation studies in health informatics. International Journal of Medical Informatics, 78(1), 1–9.