Chapter 18
Value for Money in eHealth
Meta-synthesis of Current Evidence
Francis Lau
18.1 Introduction
Over the years a number of systematic reviews on studies that evaluated the
economic return of eHealth investments have been published in the literature.
Notable examples are the review on the financial effects of health information technology (HIT) by Low et al. (2013) based on 57 studies, the review on the economics of HIT in medication management by O’Reilly, Tarride, Goeree, Lokker, and McKibbon (2012) based on 31 studies, and the scoping review on value for money in health information systems (HIS) by Bassi and Lau (2013) based on 42 studies. At a glance,
these review findings seem favourable with over half of the studies showing
positive economic returns. However, one should be mindful that the studies were based on a diverse set of economic evaluation methods, ranging from cost and outcome analyses to full economic evaluations, conducted through modelling and in field settings under different assumptions. Also, the authors of these reviews have stressed
the limitations of their findings. They include the heterogeneity of the
eHealth systems examined, lack of detail on the system features, weak study
designs with diverse costing/valuation methods and measures, and difficulty in
generalizing the results. More importantly, not all of the studies were full
economic evaluations and, thus, it was difficult to determine if the reported
benefits were worth the investments. Few studies included the incremental cost
of producing an extra unit of outcome and the long-term effect of the eHealth
system.
In this chapter, three economic evaluation case studies that have been reported
in the literature are presented to demonstrate value for money in eHealth. The
three examples are: (a) a meta-synthesis of published eHealth economic reviews
(section 18.2); (b) the cost-effectiveness and utility of computer-supported
diabetes care in Ontario, Canada (section 18.3); and (c) the budget impact and
sustainability of system-wide human immunodeficiency virus (HIV) testing in the Veterans Affairs (VA) Administration in the United States (section 18.4). This is followed by a
summary of the current state of evidence on eHealth economic evaluation for
those involved in eHealth investment decisions (section 18.5).
18.2 Evidence on Value for Money in eHealth
This section examines the results of a meta-synthesis of the three published
eHealth economic evaluation reviews by Low et al. (2013), Bassi and Lau (2013),
and O’Reilly, Tarride, et al. (2012). The intention was to combine these reviews to
make sense of the current state of evidence on value for money in eHealth
investments. To do so, first the three original reviews were reanalyzed to
reconcile the mixed findings. Then the focus turned to the full economic
evaluation studies from these reviews that were published between 2000 and 2010
in order to gain insights on the economic return for specific types of eHealth
systems.
18.2.1 Synopsis of Economic Review Findings
The review by Low and colleagues (2013) found that 75.4% (or 43 out of 57) of
their studies had reported financial benefits in the form of revenue gains and
cost savings to stakeholders. The eHealth systems in question were: 42.1%
(24/57) CPOE/CDS (computerized provider order entry/clinical decision support); 45.6% (26/57) EHR; 8.8% (5/57) HIE; and 3.5% (2/57) combined. The proportions of systems with reported benefits
included: 82.4% (14/17) outpatient EHR; 69.2% (9/13) outpatient CPOE/CDS; 60.0% (6/10) inpatient CPOE/CDS; and 75.0% (3/4) Emergency Department HIE.
The review by Bassi and Lau (2013) found that 69.7% (or 23 out of 33) of their
high-quality studies (quality score ≥ 8/10) had reported positive returns. The eHealth systems in question were:
21.2% (7/33) primary care EMR; 18.2% (6/33) CPOE; 15.2% (5/33) medication management; 15.2% (5/33) immunization; 12.1% (4/33) HIS; 9.1% (3/33) disease management; 6.1% (2/33) clinical documentation; and 3.0%
(1/33) HIE. The proportions of systems with positive returns included: 71.4% (5/7) primary
care EMR; 50.0% (3/6) CPOE; 100% (5/5) medication management; 60.0% (3/5) immunization; 75.0% (3/4) HIS; 100% (3/3) disease management; and 100% (1/1) HIE. The remaining two clinical documentation systems had inconclusive results.
The review by O’Reilly, Tarride, et al. (2012) had 31 studies but only narrative descriptions
were reported because of the heterogeneity of the settings, systems and methods
involved. While the review was on medication management, the HITs evaluated varied and were mostly CPOE, CDS, MAR (medication administration record), and combined systems (67.7% or 21/31
studies), with the remaining being barcode, EMR or computerized patient record (CPR), surveillance, and ePrescribing systems. The authors did not summarize the
proportion of studies with economic benefits but a tabulation from the
narrative tables in the review showed that 67.7% or 21 out of 31 studies had
reported some cost benefits.
18.2.2 Meta-synthesis of Full Economic Evaluation Studies
Combined, the three reviews had a total of 121 evaluation studies published
during the period between 1993 and 2010. To make sense of the review findings,
a reanalysis of all of the studies was conducted by reconciling for duplicates,
selecting only those published in English between 2000 and 2010, then grouping
them by economic analysis method. This reanalysis led to a combined list of 81
unique studies, of which only 19 or 23.5% were considered full economic
evaluations. These 19 studies were then synthesized to provide an economic
evidence base for eHealth systems in primary care EMR, medication management, CPOE/CDS, institutional HIS, disease management, immunization, documentation and HIE, as defined by Bassi and Lau (2013).
A summary of the 19 studies by eHealth system, author-year, time frame, options,
cost, outcome, comparison method, results and interpretation is shown in the
Appendix. Of these 19 studies, seven were on primary care EMRs, three on medication management, three on CPOE/CDS, two on institutional HIS, and one each on disease management, immunization, documentation and HIE. For designs, 78.9% (15/19) were field studies and 21.1% (4/19)
were simulations. For methods, 73.7% (14/19) of the studies were cost-benefit,
21.1% (4/19) cost-effectiveness, and 5.3% (1/19) cost-consequence analysis. Two
studies also included cost-minimization as a second method. For valuation,
52.6% (10/19) of the studies included some type of discounting and/or inflation
to determine the present dollar value. Of the 19 studies, only 36.8% (7/19)
included one- or multi-way sensitivity analysis, 21.1% (4/19) reported the
incremental cost-effectiveness ratio (ICER), and 5.3% (1/19) included quality-adjusted life years (QALY).
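For readers less familiar with these measures, the formulas below sketch how an ICER and a discounted present value are conventionally computed; the symbols are generic and are not drawn from any particular study in the Appendix.

```latex
% Incremental cost-effectiveness ratio: the extra cost per extra unit of outcome
% (e.g., per QALY gained or per adverse event averted)
\mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}}

% Present value of a cost C_t incurred in year t, at annual discount rate r
PV = \frac{C_t}{(1 + r)^{t}}
```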
For results, there were positive returns on investment in 100% (7/7) of the
primary care EMR, 66.7% (2/3) of CPOE/CDS, 100% (2/2) of institutional HIS, and 100% for each of the disease management (1/1), immunization (1/1) and HIE (1/1) systems. The three medication management systems and the one
documentation system did show positive returns but only when specific
conditions were met. A closer examination of the results revealed that the
positive returns from the primary care EMR studies were mainly productivity-related in terms of cost savings and increased
revenues, with little mention of tangible improvement in health outcomes. One CPOE study had mixed results in that the simulated operating margins from CPOE adoption were positive over time for large urban hospitals but not for rural or
critical access hospitals. The three medication management studies were
inconclusive as they were dependent on certain contextual factors. For
instance, Wu, Laporte, and Ungar (2007) showed an incremental
cost-effectiveness ratio (ICER) of $12,700 per adverse drug event (ADE) averted. That translated to 32.3 ADEs averted per year or 261 events averted over 10 years, but the estimates
depended on the base rate of adverse drug events, system and physician costs
and the ability to reduce ADEs. Fretheim, Aaserud, and Oxman (2006) found the cost of the thiazide intervention to be twice the cost savings in year-1, before modest savings could be projected in year-2 by expanding the intervention into a national program. Similarly, the clinical documentation study by Kopach, Geiger, and Ungar (2005) showed an ICER of $0.331 per one-day reduction in average discharge note completion time, depending on physician utilization volume and the length of the study.
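To illustrate the kind of one-way sensitivity analysis that these conditional results rest on, the sketch below varies a single parameter (the baseline ADE rate) while holding the others fixed and recomputes the cost per ADE averted each time. All parameter values are hypothetical placeholders, not figures from Wu, Laporte, and Ungar (2007).

```python
# Minimal one-way sensitivity analysis on the cost per ADE averted.
# All values are hypothetical placeholders for illustration only.

def cost_per_ade_averted(incremental_cost, baseline_ade_rate, relative_reduction, patients):
    """Extra cost divided by the number of adverse drug events averted per year."""
    ades_averted = baseline_ade_rate * relative_reduction * patients
    return incremental_cost / ades_averted

incremental_cost = 400_000     # hypothetical annual system and physician-time cost
relative_reduction = 0.30      # hypothetical proportion of ADEs prevented
patients = 10_000              # hypothetical annual patient volume

# Vary only the baseline ADE rate; all other inputs stay fixed.
for baseline_rate in (0.005, 0.010, 0.020, 0.040):
    icer = cost_per_ade_averted(incremental_cost, baseline_rate, relative_reduction, patients)
    print(f"baseline ADE rate {baseline_rate:.3f}: ${icer:,.0f} per ADE averted")
```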
Based on the results of this small set of full economic analysis studies there
is some evidence to suggest value for money in eHealth investments in selected
healthcare domains and types of systems. However, the number of studies is
small and caution is needed when generalizing these results to other settings.
18.3 Computer-supported Diabetes Care in Ontario
This section presents a set of economic evaluation modelling and field studies
on diabetes care done in the Canadian province of Ontario over a period of more than 15 years, beginning around 1999. These include: (a) the application of
the Ontario Diabetes Economic Model (ODEM) in the COMPETE-II randomized trial; and (b) a mega-analysis on optimizing chronic disease
management that includes electronic tools for diabetes care. These studies are
described below.
18.3.1 Application of ODEM in COMPETE-II Trial
The ODEM is a simulation model that uses a set of parametric risk equations, based on
specific patient characteristics, to predict the cost and occurrence of
diabetes-related complications, life expectancy and quality-adjusted life years
(QALYs) over a 40-year time horizon. The ODEM is an adaptation of the United Kingdom Prospective Diabetes Study (UKPDS) Outcomes Model, which was developed with data from the UKPDS, a randomized trial begun in the 1970s (Clarke et al., 2004).
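To make the mechanics of such a model concrete, the sketch below steps a hypothetical cohort forward year by year using an invented risk equation, accumulating discounted costs and QALYs over a 40-year horizon. It is not the ODEM or the UKPDS Outcomes Model; every coefficient, cost and utility value is a made-up placeholder.

```python
import math

# Toy cohort model in the spirit of risk-equation simulators such as the ODEM /
# UKPDS Outcomes Model. All coefficients, costs and utilities are invented
# placeholders and carry no clinical meaning.

def annual_complication_risk(hba1c, sbp, smoker):
    """Hypothetical logistic risk equation driven by patient risk factors."""
    x = -5.0 + 0.30 * (hba1c - 7.0) + 0.02 * (sbp - 120.0) + (0.5 if smoker else 0.0)
    return 1.0 / (1.0 + math.exp(-x))

def project(hba1c, sbp, smoker, horizon=40, discount=0.05,
            care_cost=1_500.0, complication_cost=8_000.0,
            u_no_comp=0.80, u_comp=0.60):
    """Expected discounted cost and QALYs per patient over the time horizon."""
    p_comp = 0.0            # probability of having developed a complication
    cost = qalys = 0.0
    risk = annual_complication_risk(hba1c, sbp, smoker)
    for year in range(1, horizon + 1):
        p_comp += (1.0 - p_comp) * risk                   # move into complication state
        discount_factor = 1.0 / (1.0 + discount) ** year
        cost += (care_cost + p_comp * complication_cost) * discount_factor
        qalys += ((1.0 - p_comp) * u_no_comp + p_comp * u_comp) * discount_factor
    return cost, qalys

# Compare a profile with slightly better risk-factor control against usual care;
# the deltas loosely mirror the COMPETE-II reductions reported later in this section.
c_int, q_int = project(hba1c=7.0, sbp=136.0, smoker=False)
c_uc, q_uc = project(hba1c=7.2, sbp=140.0, smoker=False)
icer = (c_int - c_uc) / (q_int - q_uc)
print(f"incremental cost ${c_int - c_uc:,.0f}, incremental QALYs {q_int - q_uc:.4f}, "
      f"ICER ${icer:,.0f} per QALY")
```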
Holbrook and colleagues (2009) conducted the COMPETE-II study in Ontario as a pragmatic randomized trial during 2002 and 2003. Its
objective was to determine if electronic decision support and shared
information with diabetic patients could improve their care in the community
setting. The study was conducted in three Ontario regions with 46 primary care practices and adult patients
under their care. The study results were then applied as inputs to the ODEM in a modelling study to estimate the long-term quality of life and cost
implications (O’Reilly, Holbrook, Blackhouse, Troyan, & Goeree, 2012). The key aspects of the two studies are summarized below in terms
of the diabetes cohort, intervention, economic analysis and projected benefit.
Diabetes Cohort – The study had 511 adult type-2 diabetic patients, with 253 randomized to the
intervention group and 258 to the control group. The mean follow-up time was
5.9 months and the median time since diagnosis of diabetes was 5.9 years. Key
risk factors from the trial were used as input to the ODEM such as HbA1c (glycated hemoglobin test), systolic blood pressure, cholesterol
and smoking status. The costs of resource use and diabetes-related
complications in the ODEM were derived from a prospective cohort of 734,113 diabetic patients over a
10-year period representing 4.4 million patient-years in Ontario.
Intervention – An individualized electronic decision support (DS) and reminder system for diabetes care was implemented in three Ontario regions
for use by 46 primary care practices over a one-year period. The intervention
included a Web-based diabetes tracker template for shared access by providers
and patients, an automated phone reminder for patients, and a colour tracker
page mailed to patients. The diabetes tracker template was interfaced with the EMR and allowed the display and monitoring of 13 risk factors, including blood pressure, cholesterol, HbA1c, foot exams, kidney function, weight, physical activity, smoking, eye exams, acetylsalicylic acid or equivalent, ACE inhibitors, and flu shots. The automated phone reminder system prompted patients
every month to follow up on medications, labs and physician visits. The colour
tracker page was mailed to patients four times a year and was to be taken to
physician appointments.
Economic Analysis – The long-term cost-effectiveness of the shared DS and reminder system was examined. The respective economic evaluation components
are summarized below.
- Perspective – Ontario Ministry of Health;
- Options – A shared DS and reminder system versus usual care;
- Time Frame – 40-year time horizon after the 12-month study in 2002-03, assuming a one-year treatment effect at 5% discount rate in 2010 Canadian dollars;
- Input Costs – Program implementation costs and projected diabetes complications. Program costs included tracker development and testing, ongoing project management, and required IT infrastructure;
- Outcomes – Intermediate outcomes (HbA1c, blood pressure, cholesterol and smoking), life years, quality-adjusted life years (QALYs), incremental costs, and ICER;
- Comparison of Options – Cost-effectiveness analysis to compare lifetime effects of DS and reminder system versus usual care in expected costs per patient, life years, QALYs and ICER. Sensitivity analysis to compare lifetime effects of program and treatment effect duration of one, five and 10 years, and discount rates of 0%, 3% and 5%.
Projected Benefit – The intervention reduced HbA1c by 0.2 and systolic blood pressure by 3.95 mmHg, with an overall relative risk reduction of 14% in the need for amputation. The
total cost of the intervention was $483,699, at a mean lifetime cost of $1,912
per patient receiving the intervention. The ODEM estimated the disease management costs to be $61,340 and $61,367 for the
intervention and control groups, respectively, at an incremental cost of –$26 per patient. The avoidance of complications would gain an additional 0.0117 QALYs, with an estimated ICER of $156,970 per life year gained and $160,845 per QALY gained. Sensitivity analysis showed an increase of 260% in QALYs, from 0.0117 to 0.0421, when patients were treated for five years due to reduced
downstream complications, at an ICER of $186,728. When patients were treated for 10 years there was a sixfold
increase in QALYs gained, at an ICER of $173,654. Overall, the intervention led to slight improvement in short-term
risk factors and moderate improvement in long-term health outcomes. To achieve these gains, however, the intervention had to be highly efficient and effective in its costs and care processes.
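As a rough check, the reported cost-utility figure can be approximately reconstructed from the per-patient numbers above, assuming the ICER combines the mean lifetime intervention cost with the (slightly negative) incremental disease management cost; the small gap from the published $160,845 per QALY reflects rounding in the reported inputs.

```latex
\mathrm{ICER} \approx \frac{\$1{,}912 + (-\$26)}{0.0117\ \text{QALYs}} \approx \$161{,}000 \text{ per QALY gained}
```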
18.3.2 Optimizing Chronic Disease Management Mega-analysis
In 2013 Health Quality Ontario (HQO) published a mega-analysis series drawn from 15 reports on the economic aspects
of community-based chronic disease management (CDM) interventions (HQO, 2013). The chronic diseases examined were diabetes, chronic obstructive
pulmonary disease (COPD), coronary artery disease, and congestive heart failure. The CDM interventions included discharge planning, continuity of care, in-home care,
specialized nursing practice, and electronic tools (eTools) for health
information exchange (HIE). The eTools for HIE component of this mega-analysis in diabetes care is summarized below in terms of
the diabetes cohort, intervention, economic analysis and projected benefit.
Diabetes Cohort – Adult patients with two type-2 diabetes-related physician visits or one hospital admission within a two-year period between 2006 and 2011 were included as the Ontario
cohort. For each patient, their resource use and mean 90-day total costs by
sector were estimated from the Ontario administrative databases. These included
emergency department visits, acute inpatient and same-day surgery costs, other
hospital costs, long-term care, home care and physician visits, lab costs and
drug costs. The EQ-5D (European Quality of Life 5 Dimensions) values were used as the utility
estimates for changes in quality of life from hospitalizations during the study
period. The mean EQ-5D value of 0.77 derived from 3,192 patients in the UKPDS (Clarke, Gray, & Holman, 2002) was used as the baseline utility estimate for the Ontario cohort.
The mean EQ-5D value of 0.54 was used as a proxy measure for hospitalization, based on the
study on severe hypoglycemia in diabetics by Davis et al. (2005). Patients in
the Ontario cohort who were hospitalized were assigned the utility value of
0.54 over their average length of stay. For the intervention group, a relative difference of 0.85 in hospitalization, taken from an eTools for HIE field trial by Khan, MacLean, and Littenberg (2010), was applied, reducing the proportion of patients hospitalized and thereby improving quality of life.
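A minimal sketch of how these utility values can be combined into expected QALYs per patient over one year is shown below. Only the utility values (0.77 and 0.54) and the 0.85 relative difference in hospitalization come from the mega-analysis description above; the baseline hospitalization probability and length of stay are hypothetical placeholders.

```python
# Minimal sketch: expected one-year QALYs with and without the eTools intervention.
# The utility values (0.77 baseline, 0.54 while hospitalized) and the 0.85 relative
# difference in hospitalization come from the text; the baseline hospitalization
# probability and length of stay are hypothetical.

BASELINE_UTILITY = 0.77
HOSPITAL_UTILITY = 0.54

def expected_qalys(p_hospitalized, mean_los_days):
    """Expected QALYs over one year for a cohort member."""
    hosp_fraction = mean_los_days / 365.0
    qalys_if_hospitalized = (HOSPITAL_UTILITY * hosp_fraction
                             + BASELINE_UTILITY * (1.0 - hosp_fraction))
    return (p_hospitalized * qalys_if_hospitalized
            + (1.0 - p_hospitalized) * BASELINE_UTILITY)

p_hosp_usual = 0.10     # hypothetical baseline hospitalization probability
mean_los = 7.0          # hypothetical average length of stay in days

q_usual = expected_qalys(p_hosp_usual, mean_los)
q_etools = expected_qalys(p_hosp_usual * 0.85, mean_los)   # 0.85 relative difference
print(f"usual care {q_usual:.4f} QALYs, eTools {q_etools:.4f} QALYs, "
      f"gain {q_etools - q_usual:.4f}")
```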
Intervention – The Vermont Diabetes Information System (VDIS) developed by MacLean, Littenberg, and Gagnon (2006) was used as the model
eTool for HIE intervention. The VDIS is a decision support system that sends lab results, reminders and alerts to
primary care providers and their patients with diabetes. Quarterly population
reports were also available to providers for peer comparison. A randomized
trial by MacLean, Gagnon, Callas, and Littenberg (2009) showed that VDIS improved lab monitoring of diabetic patients in primary care but not
physiologic control. For cost, the VDIS vendor quoted a one-time software cost of $5,000 and an annual maintenance cost
of $2,500 per laboratory. The annual cost to receive VDIS information was $6,000 per physician and $48 per patient in 2012 Canadian
dollars. The per-patient costs were dependent on physician roster size and
disease prevalence. Since no eTools for HIE were in regular use in Ontario at the time of the mega-analysis, the proportion
of diabetic patients that could benefit from HIE was assumed to be 100%.
Economic Analysis – The projected cost-effectiveness of the modelled eTools for HIE in community-based care was examined. The respective economic evaluation
components are summarized below.
- Perspective – The Ontario provincial health ministry level (i.e., Ministry of Health and Long-Term Care);
- Options – Hypothetical adoption of VDIS as the eTools for HIE versus usual care with no HIE;
- Time Frame – A five-year horizon with an annual 5% discount rate and costs inflated to 2012 Canadian dollars; the duration of benefit was assumed to be 32 months based on the literature;
- Input Costs – Estimated resource use costs with or without hospitalization for the Ontario cohort based on administrative data over a five-year period. Estimated one-time and ongoing VDIS intervention costs for 211 labs, 11,902 physicians and 85 diabetic patients per physician;
- Outcomes – Proportion of hospitalized patients based on severe hypoglycemia as a proxy measure from the literature and QALYs with or without hospitalization based on EQ-5D values as utility estimates from the literature;
- Comparison of Options – Cost-effectiveness analysis to compare eTools with usual care options in cost per patient, QALYs per patient, and ICER. Sensitivity analysis to compare changes in relative difference of hospitalization and emergency department visits, and marginal ongoing costs in the intervention group.
Projected Benefit – The cost-effectiveness analysis showed that the cost per patient was $29,889
with eTools versus $30,226 with usual care. The QALYs per patient were 2.795 with eTools versus 2.789 with usual care. The ICER was –$337 per patient. The sensitivity analysis showed the model was sensitive to
changes in resource use and intervention cost. For instance, a relative
difference of 0.75 in hospitalization for the intervention would change the ICER to –$1,228, whereas a relative difference of 0.95 would change the ICER to $654. A marginal cost of $74 in ongoing cost for the intervention would
change the ICER to –$724, but a marginal cost of $233 would change the ICER to $639. Overall, the intervention was found to be less costly and more
effective when compared with usual care.
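A minimal sketch of the comparison implied by these figures is shown below, using the reported per-patient costs and QALYs; the –$337 figure above corresponds to the incremental cost per patient, and the check illustrates why the intervention is described as less costly and more effective.

```python
# Dominance check using the per-patient figures reported above.
cost_etools, cost_usual = 29_889.0, 30_226.0
qalys_etools, qalys_usual = 2.795, 2.789

delta_cost = cost_etools - cost_usual      # incremental cost per patient (-$337)
delta_qalys = qalys_etools - qalys_usual   # incremental QALYs per patient (+0.006)

print(f"incremental cost per patient: ${delta_cost:,.0f}")
print(f"incremental QALYs per patient: {delta_qalys:.3f}")
if delta_cost < 0 and delta_qalys > 0:
    print("eTools dominate usual care: less costly and more effective")
```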
18.4 System-wide HIV Testing in Veterans Affairs (VA) Administration
In 1998 the United States VA Administration launched the Quality Enhancement Research Initiative (QUERI) to improve the performance of the VA healthcare system and the consequent quality of care for its veterans (Smith & Barnett, 2008). In that initiative, QUERI researchers collaborated with VA leaders and staff to implement evidence-based practice as the routine standard
of care through a six-step process:
- Identify high-risk/volume diseases or problems.
- Identify best practices.
- Identify deviations from current practices and outcomes.
- Identify and implement interventions to promote best practices.
- Document that best practices improved outcomes.
- Document that outcomes were associated with improved health-related quality of life.
For step-4 above, the implementation efforts followed a sequence of phases from a single-site pilot project to a small-scale multisite trial, followed by a large-scale multi-region trial and a final system-wide rollout. An integral part of the initiative was the use of policy cost-effectiveness and budget impact analysis in the single-site and multisite trials to determine the economic return. This section describes a case study on HIV testing at the VA Administration in terms of the multi-component intervention program, the different implementation phases it went through over the years, and the budget impact analysis done on the program.
18.4.1 Multi-component Intervention Program
The multi-component intervention was made up of computerized decision support,
audit-feedback, provider activation and organizational level change (Goetz et
al., 2008). The computerized decision support was a real-time clinical reminder
that identified patients at increased risk for HIV infections and prompted healthcare providers to offer HIV testing to these patients. The clinical reminder was triggered by the presence
of a set of predefined criteria such as prior Hepatitis B or C infection,
sexually transmitted disease, drug use, homelessness and specific behavioural
risk factors (e.g., excessive alcohol use, multiple sexual partners, body
piercing). These data elements were automatically extracted from the VA EMR during the patient visit. Once triggered, the provider had to address the
reminder by ordering an HIV test, asking for the test to be done elsewhere, or recording that the patient was either not competent to consent to testing or had refused testing.
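A minimal sketch of the kind of rule logic such a reminder could use is shown below. The field names and the criteria set are simplified, hypothetical stand-ins for the VA EMR data elements described above, not the actual VA clinical reminder.

```python
# Hypothetical sketch of a risk-based HIV testing reminder. Field names and the
# criteria set are simplified stand-ins for the VA EMR data elements described
# in the text; this is not the actual VA clinical reminder logic.

RISK_FLAGS = {
    "hepatitis_b", "hepatitis_c", "sexually_transmitted_disease",
    "drug_use", "homelessness", "excessive_alcohol_use",
    "multiple_sexual_partners", "body_piercing",
}

def reminder_due(patient: dict) -> bool:
    """Fire the reminder for untested patients with at least one risk factor."""
    if patient.get("hiv_test_on_file") or patient.get("declined_testing"):
        return False
    return bool(RISK_FLAGS & set(patient.get("risk_factors", [])))

visit = {"hiv_test_on_file": False, "declined_testing": False,
         "risk_factors": ["hepatitis_c", "homelessness"]}
if reminder_due(visit):
    print("Prompt provider: offer HIV testing, document refusal, "
          "or record inability to consent.")
```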
An audit-feedback system was developed to inform providers of their performance
in HIV evaluation and testing rates of at-risk patients at the clinic level. The
reports were distributed to clinical leaders and clinic managers via e-mail on
a quarterly basis. Provider activation included the use of academic detailing,
social marketing and educational materials to engage both providers and
patients in the initiative. Academic detailing involved one-on-one sessions in
person and ad-hoc site visits with project staff to discuss the need for and
benefits of HIV testing. Social marketing involved the recruitment of physician and nurse
leaders to encourage HIV testing at the clinic. Educational materials included information handouts,
pocket cards and posters to inform providers and patients on the need and
criteria for, and process and implications of HIV testing. Change at the organizational level involved the removal of barriers to
HIV testing, such as streamlined pretest counselling that took only two to three minutes, and post-test phone notification with brief counselling for negative test results.
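As an illustration of what the quarterly audit-feedback reports tally, the sketch below computes clinic-level HIV testing rates among at-risk patients; the visit records are a hypothetical structure, not the VA reporting pipeline.

```python
from collections import defaultdict

# Hypothetical sketch of a quarterly audit-feedback tally: the share of at-risk
# patients with an HIV test ordered, grouped by clinic.

def clinic_testing_rates(visits):
    at_risk = defaultdict(int)
    tested = defaultdict(int)
    for v in visits:
        if v["at_risk"]:
            at_risk[v["clinic"]] += 1
            if v["hiv_test_ordered"]:
                tested[v["clinic"]] += 1
    return {clinic: tested[clinic] / n for clinic, n in at_risk.items()}

sample = [
    {"clinic": "A", "at_risk": True, "hiv_test_ordered": True},
    {"clinic": "A", "at_risk": True, "hiv_test_ordered": False},
    {"clinic": "B", "at_risk": True, "hiv_test_ordered": True},
]
print(clinic_testing_rates(sample))   # {'A': 0.5, 'B': 1.0}
```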
18.4.2 Program Implementation and Evaluation
Goetz and colleagues (2008) conducted a pre-post intervention study from 2004 to
2006 to determine if the multi-component intervention program would increase
the rate of HIV testing. Five VA facilities took part in the study with two receiving the intervention and three
as controls. The HIV testing rate and the number of newly diagnosed cases in the year before and
after implementing the intervention were compared. Patient, provider and
facility-level factors that could influence testing performance were also
examined. These included patient subgroups with different demographics and risk
factors, proportions of at-risk patients tested by primary providers, as well
as the prevalence of at-risk patients and annual patient load at the facility.
The study included 36,790 untested patients with HIV risk factors from the intervention sites and 44,577 from the control sites. The adjusted rate of HIV testing at the two intervention sites increased from 4.8% to 10.8% and from
5.5% to 12.8%, and the number of newly diagnosed cases increased from 15 to 30
after implementing the intervention. There was no change in the control sites
during the same period. Overall the intervention was considered effective in
increasing the HIV testing rate and the detection of new cases.
Sustainability of the Intervention – Goetz et al. (2009) evaluated the sustainability of increased HIV testing after implementing the multi-component intervention program in 2005.
The intervention was implemented in month-1 of the intervention year 2005 then
continued for the subsequent 11 months. During the intervention year the study
team supported the provider activation component of the intervention that
included academic detailing, social marketing, and provision of educational
materials. In year-2, or the sustainability year, the responsibility for
provider activation was transferred to the clinic. During this period the
clinical reminders continued to be used, the quarterly feedback reports were
managed by clinical leaders, and provider education activities were reduced and
merged with regular staff meetings. Further organizational changes broadened
the number of providers who could order the test, eased the documentation
requirements and continued with the pretest and post-test counselling. The
results showed the monthly adjusted testing rate increased from 2% at baseline
to 6% by the end of the intervention year. Then the rate declined to 4% by the
end of the sustainability year. The testing rate for persons newly exposed to
the intervention increased during the intervention and sustainability years.
The attenuation effect in the sustainability year was caused by the increase in
the proportion of visits by untested patients despite prior exposures to the
intervention. The percentage of patients who received HIV testing was 5.0% in the pre-intervention year, 11.1% in the intervention year,
and 11.6% in the sustainability year. Overall, the intervention was considered
sustainable, especially in patients during their early contacts with the
healthcare system.
Scalability of the Intervention – Goetz and colleagues (2013) also evaluated the scalability of the
multi-component intervention in routine HIV testing and the level of support needed. A one-year three-arm
quasi-experimental study was conducted with central support, local support, and
no support (i.e., control) provided to different VA primary care sites in three geographic regions. All sites had access to the
real-time clinical reminder system. With central support, the study team
provided quarterly audit-feedback reports, provider activation and ongoing
support including site visits. With local support, the sites had only a single
conference call 30 days after the initial site visit. The control sites had no
contact with the study team. The clinical reminder was initially risk-based for
all sites in the first six months of the study, then became routine for all
patients in the following six months. In phase-1, the adjusted rate of
risk-based testing increased by 10.1%, 5.6% and 0.4% in the central, local and
control sites, respectively. In phase-2, the adjusted rate of routine testing
increased by 9.2%, 6.3% and 1.1% in the central, local and control sites. By
the end of the study, 70% to 80% of VA patients had been offered an HIV test. Overall, the multi-component intervention program was considered scalable
in reaching the goal of all VA patients being aware of their HIV status as part of routine clinical visits.
18.4.3 Budget Impact Analysis
Anaya, Chan, Karmarkar, Asch, and Goetz (2012) conducted a budget impact study
to examine the facility-specific costs of HIV testing and care for newly identified HIV patients. The study was based on the multi-component HIV intervention program discussed above, which was implemented as a pre-post
quasi-experimental trial in five Veterans Health Administration facilities
(Goetz et al., 2008). A budget impact model was developed to estimate the costs
of HIV testing, including the costs of pretest counselling, the HIV testing rates, and the treatment of newly identified HIV patients. The budget impact model, intervention, economic analysis and projected
benefits are summarized below.
Budget Impact Model – The model was developed to estimate the costs of HIV testing in a single VA facility in the primary care setting. Two HIV providers were consulted to establish relevant model end points. They covered
physician and nurse staffing costs, laboratory costs, and the costs of
antiretroviral therapy (ART) for different levels of HIV disease progression based on Cluster of Differentiation 4 (CD4) counts. The model included quarter-to-quarter changes in patient status, loss
to follow-up and deaths that occurred in a period. It covered the costs of
tested and untested patients of known and unknown HIV status who received care in a single facility over eight three-month periods. A
hypothetical cohort of 20,000 adult patients was used, in which 9.2% had already been tested, 200 were known HIV patients under care, each test required three minutes of extra nursing time, and the annual baseline HIV testing rate among untested patients was 2.1%.
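A minimal sketch of such a quarterly budget impact calculation is shown below. The cohort size, the proportion already tested, the baseline (2.1%) and expanded (15%) annual testing rates and the 0.45% positive test rate come from the study description; the per-test and quarterly ART costs are hypothetical placeholders rather than the VA model inputs.

```python
# Minimal sketch of a quarterly budget impact calculation for expanded HIV testing.
# Cohort size, proportion already tested, testing rates and positive test rate are
# taken from the text; TEST_COST and ART_COST_PER_QUARTER are hypothetical.

COHORT = 20_000
ALREADY_TESTED = 0.092
POSITIVE_RATE = 0.0045

TEST_COST = 30.0                  # hypothetical lab plus nursing cost per test
ART_COST_PER_QUARTER = 3_000.0    # hypothetical quarterly ART cost per new patient

def run(annual_testing_rate, quarters=8):
    untested = COHORT * (1.0 - ALREADY_TESTED)
    on_art = 0.0
    total_cost = 0.0
    quarterly_rate = annual_testing_rate / 4.0
    for _ in range(quarters):
        tests = untested * quarterly_rate
        new_positives = tests * POSITIVE_RATE
        untested -= tests
        on_art += new_positives
        total_cost += tests * TEST_COST + on_art * ART_COST_PER_QUARTER
    return total_cost, on_art

for rate in (0.021, 0.15):
    cost, found = run(rate)
    print(f"annual testing rate {rate:.1%}: two-year cost ${cost:,.0f}, "
          f"new HIV-positive patients {found:.1f}")
```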
Intervention – The multi-component intervention program consisted of a real-time electronic
clinical reminder for HIV testing, audit-feedback reports, provider activation and patient-provider
education.
Economic Analysis – The budget impact of expanded HIV testing in a primary care setting was examined. The respective economic
evaluation components are summarized below.
- Perspective – The integrated VA healthcare system that offers both HIV testing and care;
- Options – Expanded HIV testing rate of 15% versus baseline rate of 2.1%;
- Time Frame – A two-year horizon in eight three-month quarterly periods;
- Input Costs – Personnel and laboratory costs, and ART costs from different levels of HIV disease progression based on CD4 count, tracked on a quarterly basis;
- Outcomes – HIV testing rates, number and percent of HIV-positive patients at different CD4 levels;
- Comparison of Options – Budget impact on expanded HIV testing from 2.1% to 15% at 0.45% positive test rate; sensitivity analysis with HIV testing rates from 15% to 30%, positive test rate from 0.45% to 1%, and pretest nursing time activities from three to five minutes.
Projected Benefit – The expansion of HIV testing from 2.1% to 15% annually led to the identification of 21 additional HIV-positive patients over two years at a cost of $290,000. Over 60% of this cost
was to provide ART to newly diagnosed patients. Quarterly ART costs increased from $10,000 to more than $60,000 over two years with more HIV patients identified and treated with ART. In sensitivity analysis, serodiagnostic and annual HIV testing rates had the greatest cost impact. Overall, expanded HIV testing led to increased initial costs, mostly due to ART treatment for new patients. Using a $50,000 per QALY threshold, expanded HIV testing was cost-effective based on a total cost of $80,000 over two years for
testing, and $290,000 for testing and care for 21 additional HIV patients.
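As a rough check, the reported figure of 21 additional HIV-positive patients is consistent with the model inputs above when quarterly depletion of the untested pool and loss to follow-up are ignored:

```latex
20{,}000 \times (1 - 0.092) \times (0.15 - 0.021) \times 2 \times 0.0045 \approx 21 \text{ additional HIV-positive patients}
```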
18.5 Summary of Economic Evidence in eHealth
Overall, our meta-synthesis of the three published eHealth economic evaluation
reviews showed that there is value for money in eHealth investment. However,
the evidence varied depending on the domains, contexts and systems involved.
This evidence is strong in primary care EMR, as all seven full economic analysis studies had positive returns. For CPOE/CDS, institutional HIS, disease management, immunization and HIE systems, while there is evidence of positive returns, it is much weaker since it is based on only a small number of modelling and field studies. For
medication management and documentation systems, the evidence is weak to
inconclusive since the positive return is contingent on the interplay of
different socio-organizational, technical and external factors.
The development and validation of the ODEM and its application in the COMPETE-II and HIE studies in Ontario, Canada showed that computer-supported diabetes care could
be cost-effective but required a great deal of effort to implement and maintain
the interventions. With the electronic diabetes tracker, there was a modest
benefit in achieving process outcome targets in the short term, and some gain
in QALYs with reduced complications in the long term. However, the projected economic
return was contingent on the precision of the ODEM parameter estimates such as the disease prevalence, resource use and costs,
complication rates, and provider EMR adoption behaviours. The HIE modelling study found the sharing of patient information to be cost-effective, but it assumed 100% adoption of the eTools by all primary care providers in the province. Similarly, the multi-component HIV testing program in the Veterans Affairs Administration in the United States showed
that computerized HIV testing was cost-effective when combined with patient-provider activation and
organizational policies. Once implemented, the risk-based testing program was
shown to be sustainable with more streamlined support and eventually scalable
as a routine practice in the organization. The ICER and the gain in QALYs were considered to represent good value for money.
References
Anaya, H. D., Chan, K., Karmarkar, U., Asch, S. M., & Goetz, M. B. (2012). Budget impact analysis of HIV testing in the VA healthcare system. Value in Health, 15(8), 1022–1028.
Bassi, J., & Lau, F. (2013). Measuring value for money: A scoping review on economic
evaluation of health information systems. Journal of the American Medical Informatics Association, 20(4), 792–801.
Clarke, P., Gray, A., & Holman, R. (2002). Estimating utility values for health states of type 2
diabetic patients using the EQ-5D (UKPDS 62). Medical Decision Making, 22(4), 340–349.
Clarke, P. M., Gray, A. M., Briggs, A., Farmer, A. J., Fenn, P., Stevens, R. J.,
… UK Prospective Diabetes Study (UKPDS) Group. (2004). A model to estimate the lifetime health outcomes of patients
with type 2 diabetes: the United Kingdom Prospective Diabetes Study (UKPDS) outcomes model (UKPDS no. 68). Diabetologia, 47(10), 1747–1759.
Davis, R. E., Morrissey, M., Peters, J. R., Wittrup-Jensen, K., Kennedy-Martin,
T., & Currie, C. J. (2005). Impact of hypoglycaemia on quality of life and
productivity in type 1 and type 2 diabetes. Current Medical Research and Opinion, 21(9), 1477–1483.
Fretheim, A., Aaserud, M., & Oxman, A. D. (2006). Rational prescribing in primary care (RaPP): economic
evaluation of an intervention to improve professional practice. PLoS Medicine, 3(6), e216.
Goetz, M. B., Hoang, T., Bowman, C., Knapp, H., Rossman, B., Smith, R., … Asch, S. M. (2008). A system-wide intervention to improve HIV testing in the Veterans health administration. Journal of General Internal Medicine, 23(8), 1200–1207.
Goetz, M. B., Hoang, T., Henry, R., Knapp, H., Anaya, H. D., Gifford, A. L., & Asch, S. M. (2009). Evaluation of the sustainability of an intervention to
increase HIV testing. Journal of General Internal Medicine, 24(12), 1275–1280.
Goetz, M. B., Hoang, T., Knapp, H., Burgess, J., Fletcher, M. D., Gifford, A.
L., & Asch, S. M. (2013). Central implementation strategies outperform local ones in
improving HIV testing in Veterans healthcare administration facilities. Journal of General Internal Medicine, 28(10), 1311–1317.
Health Quality Ontario. (2013). Optimizing chronic disease management
mega-analysis: Economic evaluation. Ontario Health Technology Assessment Series, 13(13), 1–148.
Holbrook, A., Thabane, L., Keshavjee, K., Dolovich, L., Bernstein, B., Chan, D.,
… Gerstein, H. (2009). Individualized electronic decision support and reminders
to improve diabetes care in the community: COMPETE II randomized trial. Canadian Medical Association Journal, 181(1-2), 37–44.
Jones, S. S., Rudin, R. S., Perry, T., & Shekelle, P. G. (2014). Health information technology: An updated systematic
review with a focus on meaningful use. Annals of Internal Medicine, 160(1), 48–54.
Khan, S., MacLean, C. D., & Littenberg, B. (2010). The effect of the Vermont diabetes information system on
inpatient and emergency department use: Results from a randomized trial. Health Outcomes Research in Medicine, 1(1), e61–e66.
Kopach, R., Geiger, G., & Ungar, W. J. (2005). Cost-effectiveness analysis of medical documentation
alternatives. International Journal of Technology Assessment in Health Care, 21(1), 126–131.
Low, A. F. H., Phillips, A. B., Ancker, J. S., Patel, A. R., Kern, L. M., & Kaushal, R. (2013). Financial effects of health information technology: a
systematic review. American Journal of Managed Care, 19(10 Spec No), SP369–SP376.
MacLean, C. D., Littenberg, B., & Gagnon, M. (2006). Diabetes decision support: Initial experience with the
Vermont diabetes information system. American Journal of Public Health, 96(4), 593–595.
MacLean, C. D., Gagnon, M., Callas, P., & Littenberg, B. (2009). The Vermont diabetes information system: A cluster
randomized trial of a population-based decision support system. Journal of General Internal Medicine, 24(12), 1303–1310.
O’Reilly, D., Holbrook, A., Blackhouse, G., Troyan, S., & Goeree, R. (2012). Cost-effectiveness of a shared computerized decision support
system for diabetes linked to electronic medical records. Journal of the American Medical Informatics Association, 19(3), 341–345.
O’Reilly, D., Tarride, J. E., Goeree, R., Lokker, C., & McKibbon, K. A. (2012). The economics of health information technology in
medication management: a systematic review of economic evaluations. Journal of the American Medical Informatics Association, 19(3), 423–438.
Smith, M. W., & Barnett, P. G. (2008). The role of economics in the QUERI program: QUERI series. Implementation Science, 3(20). doi: 10.1186/1748-5908-3-20
Wu, R. C., Laporte, A., & Ungar, W. J. (2007). Cost effectiveness of an electronic medication ordering
and administration system in reducing adverse drug events. Journal of Evaluation in Clinical Practice, 13(3), 440–448.