Chapter 20
Evaluation of eHealth Adoption in Healthcare Organizations
Jim Warren, Yulong Gu
20.1 Introduction
Healthcare innovations, including eHealth technologies, aim to support faster,
more reliable and more transparent healthcare services. These technologies may
facilitate the design and delivery of high-quality healthcare, improved patient
outcomes and patient safety, and further generation of innovation in healthcare
processes (Chaudhry et al., 2006; Finkelstein et al., 2012; Lau, Kuziemsky,
Price, & Gardner, 2010). However, adoption of eHealth technologies in healthcare
organizations involves complex sociotechnical issues and often fails (Kaplan & Harris-Salamone, 2009). Adopters of eHealth technologies face challenges such
as the complexity inherent to the healthcare services context and a multitude
of risk factors in the eHealth development/procurement and implementation
processes. These challenges need to be understood and promptly addressed to
support successful implementation and sustained use of health innovations. As
such, one fundamental starting goal for any process of eHealth evaluation is to
evaluate adoption, particularly in terms of the uptake of the technology by the
intended end users.
In chapter 6 we categorized uptake (or simply “use”) under the “product” dimension of our Health Information Technology (IT) Evaluation Framework as a component of usability. While this is valid, uptake
is an essential step on the pathway to any and all progress on the “impact” dimension of the framework (i.e., for improvement in work and communication
patterns, organizational culture, safety and quality of healthcare, or overall
effectiveness). This is well illustrated by a study of clinical decision
support effectiveness for chronic condition management that was undertaken in
the context of general practice in the United Kingdom (U.K.). The system showed no impact on process of care or patient outcomes, but the investigators noted that usage of the system was low (Eccles et al., 2002). Further
in-depth investigation with users identified barriers to use that included
concerns about the timing of the guideline trigger, ease of use and helpfulness
of content, as well as problems in the delivery of training (Rousseau, McColl,
Newton, Grimshaw, & Eccles, 2003). The lack of overall impact is unsurprising once these barriers to adoption are understood.
Health IT adoption has received considerable attention in recent years. For instance, the
Healthcare Information and Management Systems Society (HIMSS) Analytics Electronic Medical Record Adoption Model (EMRAM) ranks healthcare organization progress into one of eight stages (0 to 7) based on the
types of systems that are in place (HIMSS, 2015). In the government context, health IT adoption in the United States is being driven by financial incentives that are
tied to achievement of a spectrum of specific “meaningful use” criteria (Marcotte et al., 2012). Similarly, the U.K. Quality and Outcomes Framework (QOF) provides substantial financial incentives to general practitioners (GPs; i.e., community-based family physicians) based on monitoring and management
levels as automatically assessed through their practice electronic medical
record systems (Lester & Campbell, 2010). It could be said that the aforementioned models reduce adoption to ticking boxes in pursuit of financial incentives (or “bragging rights”). At a more conceptual level, sophistication in health IT use can be broken down into technological support, information content,
functional support, and IT management practices (Raymond & Paré, 1992), as well as extent of systems integration (Paré & Sicotte, 2001). A further dimension of IT sophistication, in terms of application domain, concerns administrative
activities, patient care and clinical support; and in any of the above domains
one can assess the range of computerized activities and system availability, as
well as extent of use (Kitsiou, Manthou, Vlachopoulou, & Markos, 2010; Paré & Sicotte, 2001).
In this chapter, we illustrate the approach to evaluation of system adoption
with two case studies that are based on electronic referral (eReferral)
technologies. The United States National Library of Medicine has defined “referral and consultation” as “the practice of sending a patient to another program or practitioner for
services or advice which the referring source is not prepared to provide” (National Library of Medicine, 2014), which implies a transfer of care. In the
New Zealand (N.Z.) healthcare context, referral is most often from a GP to a specialist medical service. Moreover, the general practices in N.Z. tend to be private for-profit or charitable trust organizations (although
supported by government subsidies). Individual general practice sites are small
(for instance, they may be part of a strip mall) and are characterized as being
situated in “the community” alongside other services including a community pharmacy and home-care nursing.
Conversely, specialist services are provided in large part at public hospitals
operated directly by District Health Boards; eReferral aims to use IT to bridge the communications gap between these two types of providers and their
contrasting sites. While eReferral may simply replace a postal or fax process
with e-mail, more advanced IT offers opportunities for rich and rapid feedback that transforms the process. At
the more extreme end, eReferral can merge into a portal-based “shared care” model that challenges the original concept of referral (Gu, Warren, & Orr, 2014).
Although the two case studies in this chapter are both situated in the New
Zealand context of bridging community-hospital divides, we believe they can be
generalized to any situation where IT is mediating healthcare communication across provider roles and sites. Moreover,
these cases serve to illustrate contexts where providers can potentially work
around uptake of the technology (e.g., side-stepping with phone and fax) and
thus adoption is a valuable measure of success.
20.2 Selected Case Study Examples
In eHealth evaluation literature, both qualitative and quantitative methods have
been used to measure a range of indicators on usability and outcome. The value
of these evaluations is not limited to collecting robust evidence on the impact
of eHealth innovations, which of course is important for measuring project
success or supporting the decision-making process with regard to technology
purchase and further rollout (or abandonment) of the technology. The
evaluation research can also provide substantial support to the technology
development and implementation process. That is, if you evaluate early and
often, learnings from evaluation can be used to improve the acceptability and
effectiveness of the technology in its current implementation sites, as well as
being fed into subsequent phases of implementation.
The following two examples of eHealth evaluation are introduced to demonstrate
impact analysis-focused evaluation and Action Research-oriented evaluation,
respectively.
20.2.1 Case Study One — Impact of an Electronic Referral System
This case describes a retrospective evaluation study of the impact of
introducing an eReferral system that manages referrals from a community into
public secondary healthcare services (Warren, White, Day, Gu, & Pollock, 2011). The eReferral system evaluated was introduced in 2007 to 30
referring general medical practices and 28 hospital-based secondary services at
an N.Z. regional healthcare jurisdiction, Hutt Valley District Health Board (HVDHB). HVDHB serves a population of 150,000 and has one principal facility for provision of
secondary services, the 260-bed Hutt Hospital.
By October 2007, eReferral to 28 services at Hutt Hospital — to all services but the Emergency Department — had been deployed across 25 general practices. A GP, or in some cases a practice nurse, creates a referral from within their
electronic medical record system (in New Zealand usually called a practice management system, or PMS) using PMS-based templates. The form is pre-populated with PMS data, including the patient’s demographics and medical history, which the GP can edit prior to submission. The referral is messaged as Extensible Markup Language (XML) via the regional service, where it is mapped to Health Level 7 (HL7) message format and sent on to the Integration Engine that underlies the HVDHB’s Clinical Workstation. The Integration Engine generates an acknowledgement back
to the network confirming receipt of the referral, which is relayed back to the
GP PMS.
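To make the message flow concrete, the following is a minimal sketch, in Python, of how a PMS-generated XML referral might be mapped to a pipe-delimited HL7 v2 style REF message and acknowledged. It is illustrative only: the element names, segment contents and identifiers are assumptions, not the HVDHB or regional-service implementation.

# Minimal sketch (not the HVDHB implementation): mapping a hypothetical
# PMS-generated XML referral to a pipe-delimited HL7 v2 style REF message
# and generating an acknowledgement. All element and field values are illustrative.
import xml.etree.ElementTree as ET
from datetime import datetime

SAMPLE_XML = """
<referral>
  <patient nhi="ABC1234" name="DOE^JANE" dob="19701231"/>
  <service>Cardiology</service>
  <reason>Chest pain on exertion</reason>
</referral>
"""

def xml_to_hl7(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    patient = root.find("patient")
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|GP_PMS|PRACTICE|CWS|HVDHB|{ts}||REF^I12|MSG0001|P|2.4",
        f"PID|1||{patient.get('nhi')}||{patient.get('name')}||{patient.get('dob')}",
        f"RF1||R|{root.findtext('service')}",
        f"OBX|1|TX|REASON||{root.findtext('reason')}",
    ]
    return "\r".join(segments)

def build_ack(hl7_message: str) -> str:
    # Echo back the message control ID (MSH-10, split index 9) with an AA (accept) code.
    msh_fields = hl7_message.split("\r")[0].split("|")
    return (f"MSH|^~\\&|CWS|HVDHB|GP_PMS|PRACTICE|||ACK|{msh_fields[9]}|P|2.4"
            f"\rMSA|AA|{msh_fields[9]}")

if __name__ == "__main__":
    message = xml_to_hl7(SAMPLE_XML)
    print(message)
    print(build_ack(message))

In the real deployment the acknowledgement is relayed back through the regional network to the GP PMS, giving the referrer confirmation of receipt.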
Hospital staff can view and process eReferrals in the Clinical Workstation, which displays them in automated role-based work lists. It allows clinicians to communicate with service administration to manage clinics, required tests, or follow-up with patients prior to the appointment. Referral management creates an automated sequence of process events through clinician triage (assignment of priority) and, if the referral is not declined at triage, creation of a booking for a first specialist appointment (FSA). The relevant referral workflow is shown in Figure 20.1.
Figure 20.1. Referral workflow.
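As a complement to Figure 20.1, the status progression can be expressed as a simple state model. The sketch below is a hypothetical illustration; the status names and transition rules are assumptions, not the configuration of the HVDHB systems.

# Illustrative sketch of the referral status progression in Figure 20.1;
# status names and allowed transitions are assumptions, not the HVDHB configuration.
from enum import Enum, auto

class ReferralStatus(Enum):
    RECEIVED = auto()      # eReferral arrives and is acknowledged to the GP PMS
    TRIAGED = auto()       # clinician assigns a priority
    DECLINED = auto()      # request not accepted by the service
    FSA_BOOKED = auto()    # first specialist appointment created

ALLOWED_TRANSITIONS = {
    ReferralStatus.RECEIVED: {ReferralStatus.TRIAGED},
    ReferralStatus.TRIAGED: {ReferralStatus.DECLINED, ReferralStatus.FSA_BOOKED},
    ReferralStatus.DECLINED: set(),
    ReferralStatus.FSA_BOOKED: set(),
}

def advance(current: ReferralStatus, target: ReferralStatus) -> ReferralStatus:
    # Enforce the workflow: e.g., a booking can only follow triage.
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target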
As of November 2010, there was no central referral management at Hutt Hospital; however, the general Outpatient Department, with eight administration staff, managed 15 services, while the remaining services received and managed their own referrals. In
the context of an N.Z. public hospital, services undertake clinical triage of referrals that assigns
priority levels to them, including declining to service some requests (noting
that private services are also available). Referral management within the
hospital involves two concurrent systems: a Clinical Workstation and a Patient
Information Management System (PIMS), which provides general inpatient tracking and is supplied by a different vendor from the Clinical Workstation. All referrals (electronic or paper) are logged to the PIMS.
Based on a literature review, Hutt eReferral project business case and
documentation review, and stakeholder feedback, the eReferral evaluation
hypothesis was developed as: eReferral, if uptake is substantial and sustained,
should result in more efficient (and thus timely), as well as more transparent,
processing of referrals. To test this hypothesis, 33,958 transactional records
from October 2007 to the end of October 2010 were collected from the eReferral
database, as stored with the Clinical Workstation. In addition, 108,652 records of all GP referrals (electronic and paper) were extracted from the hospital PIMS for January 2004 to the end of October 2010. These data allowed
examination of eReferral’s impact, in terms of uptake (eReferral volume over time and proportion of
referrals that are electronic) and changes in latency from letter date to
triage at secondary services. The extracts, de-identified and using encrypted
health identifiers (matchable across data sets, but not reidentifiable by the
evaluators), were made available to the evaluators by HVDHB. Qualitative feedback from interviews and focus groups further provided insight
on benefits and/or liabilities of the solution, including influence on workflow
and usability.
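For readers who wish to reproduce this style of analysis on their own transaction extracts, a minimal pandas sketch is shown below. The column names and rows are hypothetical and synthetic, not the HVDHB extracts; the point is simply how monthly uptake, proportion electronic, and letter-to-triage latency can be derived from transactional records.

# Minimal sketch of the adoption metrics used in this case study, computed with
# pandas on hypothetical extracts. Column names and values are illustrative only.
import pandas as pd

# All GP referrals logged to PIMS: one row per referral, flagged electronic or paper.
referrals = pd.DataFrame({
    "letter_date": pd.to_datetime(["2009-03-02", "2009-03-05", "2009-03-09", "2009-04-01"]),
    "triage_date": pd.to_datetime(["2009-03-06", "2009-03-12", "2009-03-14", "2009-04-09"]),
    "electronic": [True, False, True, False],
})

# Uptake: monthly eReferral volume and proportion of referrals that are electronic.
monthly = referrals.groupby(referrals["letter_date"].dt.to_period("M"))["electronic"]
uptake = pd.DataFrame({
    "ereferral_volume": monthly.sum(),
    "proportion_electronic": monthly.mean(),
})

# Latency: days from referral letter date to triage, summarized by channel.
referrals["latency_days"] = (referrals["triage_date"] - referrals["letter_date"]).dt.days
latency_summary = referrals.groupby("electronic")["latency_days"].describe(
    percentiles=[0.25, 0.5, 0.75]
)

print(uptake)
print(latency_summary)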
eReferral use rose steadily to 1,000 transactions per month in 2008, and thereafter showed moderate growth to 1,200 per month in 2010. The rate of eReferral from the community in 2010 was estimated at 56% of total referrals to the hospital from general practice, and at 71% of referrals from those who had made at least one referral electronically. Figure 20.2 graphs the PIMS volumes for referral records indicating General Practice as the source, along with the transaction volume for all eReferrals (from the Clinical Workstation database), by year. A boost in total general practice referrals after relative stability in earlier years, tracking with increased eReferrals particularly between 2008 and 2009, suggests an interaction between eReferral uptake and the increase in total referrals.
Figure 20.2. General practice referral volumes by year (* 2010 data inflated by 6/5ths to
estimate full year volume).
Note. From “Introduction of electronic referral from community associated with more timely
review by secondary services,” by J. Warren, S. White, K. Day, Y. Gu, and M. Pollock, 2011, Applied Clinical Informatics, 2(4), p. 556. Copyright 2011 by Schattauer Publishing House. Reprinted with
permission.
Referral latency from letter date to hospital triage improved significantly from 2007 to 2009 (Kolmogorov-Smirnov test, p < 0.001): from a paper referral median of eight days (inter-quartile range, IQR: 4–14) in 2007 to an eReferral median of five days (IQR: 2–9) and a paper referral median of six days (IQR: 2–12) in 2009; see also Figure 20.3.
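A comparison of this kind can be reproduced with a two-sample Kolmogorov-Smirnov test, as in the brief sketch below. The latency samples are synthetic placeholders generated for illustration, not the study data; scipy and numpy are assumed to be available.

# Sketch of a two-sample Kolmogorov-Smirnov comparison of letter-to-triage
# latency distributions; the latency samples are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
latency_2007_paper = rng.gamma(shape=2.0, scale=4.5, size=500)  # synthetic, median roughly 8 days
latency_2009_eref = rng.gamma(shape=2.0, scale=3.0, size=500)   # synthetic, median roughly 5 days

result = ks_2samp(latency_2007_paper, latency_2009_eref)
print(f"KS statistic = {result.statistic:.3f}, p = {result.pvalue:.2g}")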
Qualitative feedback confirmed that the significant speed-up in referral
processing shown in Figure 20.3 was achieved without changes in staffing
levels. The evaluation concluded that substantial, rapid, and voluntary uptake
of eReferrals was observed, associated with faster, more reliable, and more
transparent referral processing. Clinical users appreciated the improved referral visibility (status and content access); however, both GPs (referral senders) and specialists (receivers) pointed out system usability issues, such as difficulty attaching files at the sender’s end and opening them at the receiver’s end.
Figure 20.3. Median, first and third quartile (‘Med’, ‘1stQ’ and ‘3rdQ’ respectively) of letter-to-triage latency for e-referrals and paper referrals
by year.
Note. From “Introduction of electronic referral from community associated with more timely
review by secondary services,” by J. Warren, S. White, K. Day, Y. Gu, and M. Pollock, 2011, Applied Clinical Informatics, 2(4), p. 557. Copyright 2011 by Schattauer Publishing House. Reprinted with
permission.
20.2.2 Case Study Two — Promoting Sustained Use of a Shared Care Planning Program
The evaluation of New Zealand’s National Shared Care Planning Program (NSCPP) was a case of Action Research-oriented evaluation that was planned during the
eHealth program’s business case stage in 2010. The evaluation was undertaken concurrently with the pilot development and implementation (2011 to 2012), with the aim of assessing success as well as supporting the pilot processes (Gu, Humphrey, Warren, & Streeter, 2014; Gu, Humphrey, Warren, Tibby, & Bycroft, 2012; Warren, Gu, & Humphrey, 2012). This example applied the principle that eHealth evaluation
should begin before the new technology is introduced into the health workflow
and be planned for along with the planning of the implementation itself. It
demonstrated how evaluators could work in collaboration with the broader
eHealth project team to understand and improve the user experience.
NSCPP took an IT-enabled approach to support shared care, shared decision-making, and care
planning for long-term condition management. A Web-based technology solution
was developed to provide a shared care record and coordination capability,
including care plans, messages, and task assignment, for multidisciplinary care
teams including patients themselves. The goal was to enable a patient-centred
approach to care irrespective of the current care provider in general,
specialist or allied healthcare settings, by facilitating both care
coordination and supported self-management. The technology was integrated with GPs’ PMSs and provided browser-based access to patient records for other community-based providers, hospital providers and patients.
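For technically minded readers, the kinds of shared record elements described above can be sketched as a simple data model. The classes and fields below are purely illustrative assumptions, not the NSCPP schema.

# Hypothetical sketch of the kinds of shared-record elements NSCPP provided
# (care plans, tasks, messages) for a multidisciplinary team that includes the patient.
# Class and field names are illustrative only, not the NSCPP schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class TeamMember:
    name: str
    role: str                  # e.g., "GP", "practice nurse", "pharmacist", "patient"

@dataclass
class Task:
    description: str
    assigned_to: TeamMember
    due: Optional[datetime] = None
    completed: bool = False    # e.g., a patient marking a task as completed

@dataclass
class CarePlan:
    patient: TeamMember
    goals: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)
    messages: List[str] = field(default_factory=list)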
The NSCPP evaluation took an iterative action research approach applying both qualitative and quantitative methods. The pilot software was refined in response to ongoing feedback from the evaluation, which emphasized attention to user feedback through interviews, focus groups and questionnaires with both participating healthcare professionals and patients; thematic analysis of communication records in the pilot system, such as tasks and messages; and quantitative analysis of pilot system transaction records and health service usage data. Findings were used to identify pressing issues and, drawing on evidence and expert experience, to develop corresponding recommendations for addressing them. This
multifaceted data collection framework supported rapid synthesis of information
and routine feedback loops to the program team to inform ongoing approaches in
the rollout of the program. With an action research orientation, the methods
and tools for the evaluation study were constantly examined and developed to
accommodate the NSCPP development needs.
Program uptake, in terms of technology usage patterns and user experience, was closely monitored via qualitative feedback as well as analysis of users’ activities in the pilot technology. These activities were examined in the context of the users’ professional roles, for example GP, general practice nurse, specialist physician, secondary nurse, allied health professional (including pharmacist and physiotherapist), and patient. Figure
20.4 captures user activities of creating and modifying tasks, notes, care plan
elements and messages in the first nine months of the program, including the “Exploration” Phase, from March to June 2011 with one participating general practice and one
secondary service and the “Limited Deployment” Phase (since July), extending to eight general practices, five secondary
services and four community pharmacies. The modification activity includes
marking a task as completed (which was the only such action available to patients in the patient portal at the time). Figure 20.5 shows user activities
in terms of viewing the records, including tasks, notes, plans, messages,
diagnosis, measurement results, medication and record summary, by month and
roles.
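The aggregation behind Figures 20.4 and 20.5 amounts to grouping system audit-log entries by month, user role and action type. A minimal sketch is given below; the column names and rows are hypothetical, not the NSCPP audit log format.

# Sketch of aggregating audit-log entries by month, user role and action type,
# in the style of Figures 20.4 and 20.5. Column names and rows are illustrative.
import pandas as pd

audit_log = pd.DataFrame({
    "timestamp": pd.to_datetime(["2011-08-03", "2011-08-15", "2011-09-02", "2011-10-21"]),
    "role": ["GP", "practice nurse", "patient", "allied health"],
    "action": ["create", "modify", "view", "view"],
    "element": ["care plan", "task", "task", "message"],
})
audit_log["month"] = audit_log["timestamp"].dt.to_period("M")

# Entries created or modified, by month and role (cf. Figure 20.4).
created_modified = (audit_log[audit_log["action"].isin(["create", "modify"])]
                    .pivot_table(index="month", columns="role",
                                 values="element", aggfunc="count", fill_value=0))

# Elements viewed, by month and role (cf. Figure 20.5).
viewed = (audit_log[audit_log["action"] == "view"]
          .pivot_table(index="month", columns="role",
                       values="element", aggfunc="count", fill_value=0))

print(created_modified)
print(viewed)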
Figure 20.4. Sum of entries created or modified (over notes, care plan elements, messages
and tasks) by role.
The above figures show the emergence of patient and allied health users from August to October, with steady growth in allied health professionals’ activity and in patient viewing. The role of specialist physicians as direct users, particularly with respect to element creation/modification, is quite small. The role of nurses is dominant for viewing and for creation or modification in all time periods, except for a few cases in the early “Exploration” Phase in February. In the “Limited Deployment” Phase (since July), the role of general practice in element creation is highly dominant (at least two-thirds of entries), but is more balanced by other users with respect to viewing (roughly 50%). The observed pattern of nurses being the
most active users extended to task assignment, an indicator of who is “driving”. However, user interviews indicated that this might underestimate the guidance
provided by physicians (indeed, at times, even literally looking over the
shoulder of the nurse operating the software). And, of course, the technology
does not capture verbal communications that occur between nurses and physicians
onsite. It is recognized that there is exciting potential for workforce
transformation with NSCPP, but with the related challenge of defining the new responsibilities (and
determining if these are met by new people or reorientation of existing roles).
The question of who funds the time to create care plans was raised repeatedly. A designated, and appropriately compensated, lead care coordinator (perhaps a nurse) would also help address a further problem: the need to ensure timely responsiveness to issues emerging in the care of any given patient.
Figure 20.5. Elements viewed by user role based on number of NSCPP system audit log entries.
In fact, NSCPP has highlighted a range of fundamental challenges, including: (a)
sociotechnical issues (e.g., interoperability in the non-standardized system
environment, shortage of workforce skills to deliver care planning, lack of
time/personnel to implement shared care, IT interface challenges, and mechanisms to involve patients and families); (b)
governance of information, clinical workflows, privacy and funding models; and
(c) patient safety concerns in relation to information access (and potentially
input) by patients (e.g., detailed clinical communications can present
difficulty for patient interpretation and are readily misunderstood). Moreover, there was no agreed definition, either in theory or among participating organizations and individuals, of “shared care planning” or of the essential elements, roles and responsibilities needed for its delivery.
On the other hand, most pilot participants acknowledged the notion that shared care is as much about sharing care and responsibility with patients and their families as it is about sharing care within the interdisciplinary team.
The NSCPP evaluation concluded that while many issues remain unresolved, the NSCPP experience is making the issues far more concrete and is building a wide community of clinical and patient users who now have first-hand experience to inform continued technology and policy development.
20.3 Discussion
Adoption is the essential first step in benefits realization for health IT. All evaluation studies must include investigation of adoption or risk
misleading results. Substantial and sustained uptake in use of a system
indicates success across a range of issues in project management, leadership,
deployment, training, usability and overall “value proposition” of the system for the users. Conversely, failure in adoption indicates a
breakdown. Continuing to pursue other aspects of evaluation in the face of poor
adoption can lead to mistaking the IT system for the cause in a situation where other factors in fact account for
observed variations in performance. Moreover, it is important to recognize that
adoption is not an all-or-nothing proposition. Users may adopt some features of
a system but not others, or uptake may be greater with one class of users than
another (or at one site versus another). Such variation in uptake warrants more
in-depth investigation and can lead to the discovery of opportunities for
improvement wherever needed — in usability, training, system features or broader workflow and work role
expectations.
Obtaining quantitative measures of adoption is usually relatively easy in the
context of health IT because the systems, by their nature, lay down transactional “footprints” of their activity: a computerized physician order entry (CPOE) system creates records of orders, an electronic referral system creates
records of referrals. Moreover, most systems will create usage logs for other
purposes (e.g., security audit), although advanced planning to ensure the
logging of the right information can greatly facilitate subsequent analysis.
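As an illustration of such advance planning, the sketch below lists one possible minimal set of fields for an adoption-oriented usage log entry. The structure and field names are suggestions only, not drawn from the case study systems.

# A possible minimal structure for an adoption-oriented usage log entry, so that
# uptake can later be broken down by user role, feature and site. Field names
# are suggestions only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageLogEntry:
    timestamp: datetime   # when the action occurred
    user_id: str          # pseudonymous identifier, not reidentifiable by evaluators
    user_role: str        # e.g., "GP", "nurse", "specialist", "patient"
    site: str             # practice or hospital service where the action occurred
    feature: str          # system feature used, e.g., "create referral", "view status"
    record_id: str        # encrypted identifier to link related transactions

Recording role, site and feature for each transaction makes it straightforward to later break adoption down by class of user, by site, or by feature.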
The greatest benefits of adoption evaluation come, however, with the qualitative
analysis in follow-up to areas where uptake is quantitatively weak. When
interviewed, users can generally state the barriers to adoption that they are
facing. Further, particularly if the interviews are structured to support such
feedback, users may already have suggestions for improvements, or shed light on
fundamental problems that underlie failure to adopt the system. This
information can then be fed back to the broader project team to reduce the
barriers as effectively as possible. As with all aspects of evaluation, the
opportunities are greatest with a deployment process that is iterative and
staged so that time and resources are available to learn from initial
evaluation activities and to apply those lessons in later deployments.
20.4 Summary
In this chapter we have emphasized the importance of studying adoption in terms
of substantial and sustained uptake of the system by its intended users as a
cornerstone of eHealth evaluation. Through two case studies we have illustrated
quantitative and qualitative approaches, with the quantitative dimensions
underpinned by analysis of the quantity, source and timing of system
transactions and the qualitative dimension underpinned by interviews. We have
shown that uptake can be heterogeneous — with differences in usage levels for different classes of users, for example.
We have seen that following up with users about barriers to uptake can yield insights into both the current and the potential role of the health IT-based innovation.
References
Chaudhry, B., Wang, J., Wu, S., Maglione, M., Mojica, W., Roth, E., . . .
Shekelle, P. G. (2006). Systematic review: impact of health information
technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine, 144(10), 742–752.
Eccles, M., McColl, E., Steen, N., Rousseau, N., Grimshaw, J., Parkin, D., & Purves, I. (2002). Effect of computerised evidence based guidelines on
management of asthma and angina in adults in primary care: cluster randomised
controlled trial. British Medical Journal, 325(7370), 941.
Finkelstein, J., Knight, A., Marinopoulos, S., Gibbons, M. C., Berger, Z.,
Aboumatar, H., . . . Bass, E. B. (2012, June). Enabling patient-centered care through health information technology. Evidence Reports / Technology Assessments No. 206. Rockville, MD: Agency for Healthcare Research & Quality.
Gu, Y., Humphrey, G., Warren, J., & Streeter, J. (2014). Measuring service utilisation impact of the Shared Care
Planning Programme. Health Care and Informatics Review Online, 18(1), 16–24.
Gu, Y., Humphrey, G., Warren, J., Tibby, S., & Bycroft, J. (2012). An innovative approach to shared care — New Zealand pilot study of a technology-enabled national Shared Care Planning
Programme. Paper presented at the Health Informatics New Zealand Conference, November 7 to
9, Rotorua.
Gu, Y., Warren, J., & Orr, M. (2014). The potentials and challenges of electronic referrals in transforming
healthcare. New Zealand Medical Journal, 127(1398), 111–118.
Healthcare Information and Management Systems Society (HIMSS). (2015). Electronic Medical Record Adoption Model (EMRAM). Retrieved from
http://www.himssanalytics.com/provider-solutions#block-himss-general-himss-prov-sol-emram
Kaplan, B., & Harris-Salamone, K. D. (2009). Health IT success and failure: recommendations from literature and an AMIA workshop. Journal of the American Medical Informatics Association, 16(3), 291–299. doi: 10.1197/jamia.M2997
Kitsiou, S., Manthou, V., Vlachopoulou, M., & Markos, A. (2010). Adoption and sophistication of clinical information systems
in Greek public hospitals: Results from a national web-based survey. In P.
Bamidis & N. Pallikarakis (Eds.), XII Mediterranean Conference on Medical and Biological Engineering and Computing
2010 (Vol. 29, pp. 1011–1016). Berlin, Heidelberg: Springer.
Lau, F., Kuziemsky, C., Price, M., & Gardner, J. (2010). A review on systematic reviews of health information system
studies. Journal of the American Medical Informatics Association, 17(6), 637–645. doi: 10.1136/jamia.2010.004838
Lester, H., & Campbell, S. (2010). Developing Quality and Outcomes Framework (QOF) indicators and the concept of ‘QOFability’. Quality in Primary Care, 18(2), 103–109.
Marcotte, L., Seidman, J., Trudel, K., Berwick, D. M., Blumenthal, D.,
Mostashari, F., & Jain, S. H. (2012). Achieving meaningful use of health information technology:
A guide for physicians to the EHR incentive programs. Archives of Internal Medicine, 172(9), 731–736. doi: 10.1001/archinternmed.2012.872
National Library of Medicine. (2014). 2014 MeSH descriptor data: Referral and consultation. Retrieved from
http://www.nlm.nih.gov/cgi/mesh/2014/MB_cgi?mode=&index=11500
Paré, G., & Sicotte, C. (2001). Information technology sophistication in health care: An
instrument validation study among Canadian hospitals. International Journal of Medical Informatics, 63(3), 205–223. doi: 10.1016/S1386-5056(01)00178-2
Raymond, L., & Paré, G. (1992). Measurement of information technology sophistication in small
manufacturing businesses. Information Resources Management Journal, 5(2), 4–16. doi: 10.4018/irmj.1992040101
Rousseau, N., McColl, E., Newton, J., Grimshaw, J., & Eccles, M. (2003). Practice based, longitudinal, qualitative interview study of
computerised evidence based guidelines in primary care. British Medical Journal, 326(7384), 314.
Warren, J., Gu, Y., & Humphrey, G. (2012). Usage analysis of a shared care planning system. Paper presented at the AMIA 2012 Annual Symposium (AMIA 2012), November 3 to 7, Chicago.
Warren, J., White, S., Day, K., Gu, Y., & Pollock, M. (2011). Introduction of electronic referral from community
associated with more timely review by secondary services. Applied Clinical Informatics, 2(4), 546–564.