Chapter 14
Methods for eHealth Economic Evaluation Studies
Francis Lau
14.1 Introduction
A plethora of evaluation methods has been used in the literature to examine the
economic return of eHealth investments. These methods offer different ways of
determining the “value for money” associated with a given eHealth system, often based on specific
assumptions and needs. However, this diversity has created some ambiguity about
when and how one should choose among the methods, how to maintain the rigour of
the process and its reporting, and how to ensure the findings are relevant to
the organization and stakeholders involved.
This chapter reviews the economic evaluation methods that are used in
healthcare, especially those that have been applied in eHealth. It draws on the
eHealth Economic Evaluation Framework discussed in chapter 5 by elaborating on
the common underlying design, analysis and reporting aspects of the methods
presented. In so doing, a better understanding of when and how these methods
can be applied in real-world settings is gained. Note that it is beyond the
scope of this chapter to describe all known economic evaluation methods in
detail. Rather, its focus is to introduce selected methods and the processes
involved from the eHealth literature. The Appendix to this chapter presents a
glossary of relevant terms with additional reference citations for those
interested in greater detail on these methods.
Specifically, this chapter describes the types of eHealth economic evaluation
methods reported, the process for identifying, measuring and valuating costs
and outcomes and assessing impact, as well as best practice guidance that has
been published. Three brief exemplary cases are included to illustrate the
types of eHealth economic evaluation used and their implications for practice.
14.2 eHealth Economic Evaluation Methods
The basic principle behind economic evaluation is the examination of the costs
and outcomes associated with each of the options being considered to determine
if they are worth the investment (Drummond, Sculpher, Torrance, O’Brien, & Stoddart, 2005). For eHealth it is the compilation of the resources required to
adopt a particular eHealth system option and the consequences derived or
expected from the adoption of that system. While different types of resources
may be involved, they are always expressed in monetary units as costs.
Consequences depend on the natural units in which the outcomes are measured and
on whether they are then aggregated and/or converted into a common unit for
comparison.
The type of economic analysis is influenced by how the costs and outcomes are
handled. In cost-benefit analysis both the costs and outcomes of the options
are expressed and compared in a monetary unit. In cost-effectiveness analysis
there is one main outcome that is expressed in its natural unit such as the
readmission rate. In cost-consequence analysis there are multiple outcomes
reported in their respective units without aggregation such as the readmission
rate and hospital length of stay. In cost-minimization analysis the least-cost
option is selected assuming all options have equivalent outcomes. In
cost-utility analysis the outcome is based on health state preference values
such as quality-adjusted life years. Regardless of the type of analysis used,
it is important to determine the incremental cost of producing an additional
unit of outcome from the options being considered.
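To make the incremental comparison concrete, the sketch below (in Python, with purely hypothetical cost and outcome figures) computes an incremental cost-effectiveness ratio for a new option against current practice.

```python
def icer(cost_new, cost_current, effect_new, effect_current):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of outcome."""
    return (cost_new - cost_current) / (effect_new - effect_current)

# Hypothetical figures: a CPOE option versus current practice, with effectiveness
# measured in adverse drug events (ADEs) prevented per year.
print(icer(cost_new=250_000, cost_current=180_000, effect_new=120, effect_current=85))
# -> 2000.0 dollars of incremental cost per additional ADE prevented
```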
Economic evaluation can be done through empirical or modelling studies. In
empirical studies, actual cost and outcome data, sometimes supplemented with
estimates, are collected as part of a field trial such as a randomized
controlled study to determine the impact of an eHealth system. The economic
impact is then analyzed and reported alongside the field trial result, which is
the clinical impact of the system under consideration. In modelling studies,
cost and outcome data are extracted from internal and/or published sources,
then analyzed with such decision models as Monte Carlo simulation or logistic
regression to project future costs and outcomes over a specified time horizon.
Some studies combine both the field trial and modelling approaches by applying
the empirical data from the trial to make long-term modelling projections.
Regardless of the study design, the evaluation perspective, data sources, time
frame, options, and comparison method need to be explicit to ensure the rigour
and generalizability of the results.
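As an illustration of the modelling approach, the sketch below runs a simple Monte Carlo simulation over a five-year horizon. All parameter values (one-time cost, savings distribution, discount rate) are hypothetical assumptions, not drawn from any particular study.

```python
import random

def simulate_net_benefit(runs=10_000, years=5, discount=0.05):
    """Monte Carlo sketch: project the discounted net benefit of an eHealth system
    when the annual savings are uncertain. All figures are hypothetical."""
    one_time_cost = 150_000
    results = []
    for _ in range(runs):
        annual_saving = random.triangular(20_000, 80_000, 50_000)  # low, high, mode
        npv = -one_time_cost + sum(
            annual_saving / (1 + discount) ** t for t in range(1, years + 1)
        )
        results.append(npv)
    results.sort()
    return results[runs // 2], results[int(runs * 0.05)], results[int(runs * 0.95)]

median, p5, p95 = simulate_net_benefit()
print(f"median NPV ~ {median:,.0f}; 90% interval ~ ({p5:,.0f}, {p95:,.0f})")
```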
Two other economic evaluation methods used by healthcare organizations in
investment decisions are budget impact analysis and priority setting through
program budgeting and marginal analysis. While these two methods are often
used by key stakeholder groups in investment and disinvestment decisions across
a wide range of healthcare services and programs based on overall importance,
they are seldom seen in the eHealth literature. Even so, it is important to be
aware of these methods and their implications in eHealth.
14.3 Determining Costs, Outcomes and Importance
The process of determining the costs, outcomes and importance of an eHealth
system is an integral part of any economic evaluation and needs to be made
explicit. The process involves the identification of relevant costs and
outcomes, the collection and quantification of costs and outcomes from
different data sources, appraisal of their monetary value, and examination of
the budgetary impact and overall importance of the eHealth system on the
organization and its stakeholder groups (Simoens, 2009). The process is
described below.
14.3.1 Identification of Costs and Outcomes
The process of identifying costs and outcomes in eHealth economic evaluation
involves the determination of the study perspective, time frame, and types of
costs and outcomes to be included (Bassi & Lau, 2013). Perspective is the viewpoint from which the evaluation is being
considered, which can be individual, organizational, payer, or societal in
nature. Depending on the perspective, certain costs and outcomes may be
irrelevant and excluded from the evaluation. For instance, from the perspective
of general practitioners who work under a fee-for-service arrangement, the
change in their patients’ productivity or quality of life may have little relevance to the return on
investment of the EMR in their office practice. On the other hand, when the EMR is viewed from a societal perspective, any improvement in the overall population’s work productivity and health status is considered a positive return on the
investment made.
Since the costs and outcomes associated with the adoption of an eHealth system
may accrue differently over time, one has to ensure the time frame chosen for
the study is of sufficient duration to capture all of the relevant data
involved. For instance, during the implementation of a system there can be
decreased staff productivity due to the extra workload and learning required.
Similarly, there is often a time delay before the expected change in outcomes
can be observed, such as future cost savings through reduced rates of
medication errors and adverse drug events after the adoption of a CPOE system. As such, the collection of accrued costs and outcomes should extend
beyond the implementation period, allowing the system to stabilize and reach
the point at which the change in outcomes is expected to occur.
The types of costs and outcomes to be included in an eHealth economic evaluation
study should be clearly defined at the outset. The types of costs reported in
the eHealth literature include one-time direct costs, ongoing direct costs, and
ongoing indirect costs. Examples of one-time direct costs are hardware,
software, conversion, training and support. Examples of ongoing direct costs
are system maintenance and upgrade, user/technical support and training.
Examples of ongoing indirect costs are prorated IT management costs and changes in staff workload. The types of outcomes include
revenues, cost savings, resource utilization, and clinical/health outcomes.
Examples of revenues are money generated from billing and payment of services
provided through the system and changes in financial arrangements such as
reimbursement rates and accounts receivable days. Examples of labour, supply
and capital savings are changes in staffing and supply costs and capital
expenditures after system adoption. Examples of health outcomes are changes in
patients’ clinical conditions and adverse events detected. Note that the outcomes
reported in the eHealth literature are mostly tangible in nature. There are
also intangible outcomes such as patient suffering and staff morale affected by
eHealth systems but they are difficult to quantify and are seldom addressed.
For detailed lists of cost and outcome measures and references, refer to the
additional online material (Appendices 9 and 10, respectively) in Bassi and Lau
(2013).
14.3.2 Measurement of Costs and Outcomes
When measuring costs and outcomes, one needs to consider the costing approach,
data sources and analytical methods used. Costing approach refers to the use of
micro-costing versus macro-costing to determine the costs and outcomes in each
eHealth system option (Roberts, 2006). Micro-costing is a detailed bottom-up
accounting approach that measures every relevant resource used in system
adoption. Macro-costing takes a top-down approach to provide gross estimates of
resource use at an aggregate level without the detail. For instance, to measure
the cost of a CPOE system with micro-costing, one would compile all of the relevant direct,
indirect, one-time and ongoing costs that have accrued over the defined time
period. With macro-costing, one may assign a portion of the overall IT operation budget based on some formula as the CPOE cost. While micro-costing is more precise in determining the detailed costs and
outcomes for a system, it is a time-consuming and context-specific approach
that is expensive and, hence, less generalizable than macro-costing.
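The contrast between the two costing approaches can be sketched as follows; the cost items, hourly rate and allocation formula are hypothetical.

```python
# Micro-costing: sum every itemized resource attributable to the CPOE system (hypothetical figures).
micro_items = {
    "hardware": 40_000,
    "software_licences": 60_000,
    "training": 300 * 45,            # staff hours x hourly cost
    "interface_conversion": 25_000,
    "annual_maintenance": 18_000,
}
micro_cost = sum(micro_items.values())

# Macro-costing: apportion a share of the overall IT operating budget using a simple formula.
it_operating_budget = 2_000_000
assumed_cpoe_share = 0.08            # assumed allocation formula
macro_cost = it_operating_budget * assumed_cpoe_share

print(f"micro-costing estimate: {micro_cost:,}; macro-costing estimate: {macro_cost:,.0f}")
```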
The sources of cost and outcome data can be internal records, published reports
and expert opinions. Internal records can be obtained retrospectively from
historical data such as financial statements and patient charts, or
prospectively from resource use data collected in a field study. Published
reports are often publicly available statistics such as aggregate health
expenditures reported at the regional or national level, and established
disease prevalence rates at the community or population level. Expert opinions
are ways to provide estimates through consensus when it is impractical to
derive the actual detailed costs and outcomes, or to project future benefits
not yet realized such as the extent of reduced medication errors expected from
a CPOE system (Bassi & Lau, 2013, Table 4).
The analytical methods used to measure costs and outcomes can be based on
accounting, statistical or operations research approaches. The accounting
approach uses cost accounting, managerial accounting and financial accounting
methods to determine the costs and outcomes of the respective system options.
The statistical approach uses such methods as logistic regression, general
linear/mixed model and inferential testing for group differences (e.g., t-test, chi-square and odds ratio) to determine the presence and magnitude of the
differences in costs and outcomes that exist among the options being
considered. The operations research approach uses such methods as panel
regression, parametric cost analysis, stochastic frontier analysis and
simulation to estimate the direction and magnitude of projected changes in
costs and outcomes for each of the options involved (Bassi & Lau, 2013, Table 4).
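As a small example of inferential testing for group differences, the sketch below applies a two-sample t-test to hypothetical per-patient cost data before and after system adoption; it assumes SciPy is available, and the figures are illustrative only.

```python
from scipy import stats

# Hypothetical per-patient medication costs before and after CPOE adoption.
costs_pre  = [112, 98, 135, 120, 101, 143, 128, 117, 109, 131]
costs_post = [ 95, 88, 121, 104,  92, 130, 110, 101,  97, 118]

result = stats.ttest_ind(costs_pre, costs_post)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```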
14.3.3 Valuation of Costs and Outcomes
Valuation is the determination of the monetary value of the costs and outcomes
associated with the options being considered (Simoens, 2009). The key concepts
in valuation when comparing the worth of each option are the notions of
uncertainty, discounting, present value, inflation, and opportunity cost. These
concepts are briefly outlined below.
- Uncertainty refers to the degree of imprecision in the costs and outcomes of the options. Such uncertainty can arise from the selected analytical methods, data samples, end point extrapolations and generalization of results. A common approach to handling uncertainty is through sensitivity analysis where a range of cost, outcome and other parameter estimates (e.g., time frame, discount rate) are applied to observe the direction and magnitude of change in the results (Brennan & Akehurst, 2000).
- Discounting is the incorporation of the time value of money into the costs and outcomes for each option being considered. It is based on the concept that a dollar is worth less tomorrow than today. Therefore discounting allows the calculation of the present value of costs and outcomes that can accrue differently over time. The most common discount rates found in the literature are between 3% and 5%. Often, a sensitivity analysis is performed by varying the discount rates to observe the change in results (Roberts, 2006).
- Present value (PV) is the current worth of a future sum of money based on a particular discount or interest rate. It is used to compare the expected cash flow for each of the options as they may accrue differently over time. A related term is net present value (NPV), which is the difference between the present value of the cash inflow and outflow in an option. When deciding among options, the PV or NPV with the highest value should be chosen (Roberts, 2006).
- Inflation is the sustained increase in the general price level of goods and services measured as an annual percentage increase called the inflation rate. In economic evaluation, the preferred approach is to use constant dollars and a small discount rate without inflation (known as the real discount rate). If the cost items inflate at different rates, the preferred approach is to apply different real discount rates to individual items without inflation (Drummond, Sculpher, et al., 2005).
- Opportunity cost is the foregone cost or benefit that could have been derived from the next best option instead of the one selected. When considering opportunity cost we are concerned with the incremental increases in healthcare budgets with alternative options and not the opportunity cost incurred elsewhere in the economy. One way to identify opportunity cost is to present healthcare and non-healthcare costs and benefits separately (Drummond, Sculpher, et al., 2005).
When attaching monetary values to costs and outcomes, one should apply current
and locally relevant unit costs and benefits. The preference is to use
published data sources from within the organization or region where the
economic evaluation is done. If these sources are not available, then other
data may be used but they should be adjusted for differences in price year and
currency where appropriate. Discounting should be applied to both costs and
outcomes using the same discount rate. Undiscounted costs and outcomes should
also be reported to allow comparison across contexts, as local discount rates
can vary. Where there is uncertainty in the costs and outcomes,
sensitivity analysis should be included to assess their effects on the options
(Brunetti et al., 2013).
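The discounting and present value concepts above can be expressed in a few lines. The sketch below computes the net present value of a hypothetical EMR option and varies the discount rate over the commonly cited 3% to 5% range as a simple sensitivity analysis; all figures are assumptions for illustration.

```python
def present_value(amount, rate, year):
    """Discount a future cost or benefit back to its present value."""
    return amount / (1 + rate) ** year

def npv(one_time_cost, annual_net_benefit, years, rate):
    """Net present value: discounted inflows minus the initial outflow."""
    return -one_time_cost + sum(
        present_value(annual_net_benefit, rate, t) for t in range(1, years + 1)
    )

# Hypothetical EMR option: $100,000 up front, $30,000 net benefit per year for five years.
for rate in (0.03, 0.04, 0.05):
    print(f"discount rate {rate:.0%}: NPV = {npv(100_000, 30_000, 5, rate):,.0f}")
```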
14.3.4 Budget Impact and Priority Setting
Budget impact and priority setting relate to the overall importance of the
respective investment decisions to the organization and its key stakeholder
groups. In budget impact analysis, the focus is on the financial consequences
of introducing a new intervention in a specific setting over a short to medium
term. It takes on the perspective of the budget holder who has to pay for the
intervention, with the alternative being the current practice, or status quo.
In the analysis, only direct costs are typically included, over a time horizon
of three years or less and without discounting. For effectiveness, only
short-term costs and savings are measured, and the emphasis is on marginal return such as
the incremental cost-effectiveness ratio that quantifies the cost for each
additional unit of outcome produced. Sensitivity analysis is often included to
demonstrate the impact of different scenarios and extreme cases (Garattini & van de Vooren, 2011).
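A minimal sketch of a budget impact calculation is shown below: undiscounted direct costs over a three-year horizon from the budget holder's perspective, with a pessimistic scenario as a simple sensitivity check. The unit costs, patient volume and uptake schedule are hypothetical.

```python
def budget_impact(unit_cost_new, unit_cost_current, patients_per_year, uptake_by_year):
    """Undiscounted yearly budget impact of replacing current practice with a new
    intervention, given the fraction of patients switched over in each year."""
    return [
        round(patients_per_year * uptake * (unit_cost_new - unit_cost_current))
        for uptake in uptake_by_year
    ]

base_case = budget_impact(220, 180, patients_per_year=5_000, uptake_by_year=[0.2, 0.5, 0.8])
worst_case = budget_impact(260, 180, patients_per_year=5_000, uptake_by_year=[0.3, 0.7, 1.0])
print("base case by year:", base_case)    # e.g., [40000, 100000, 160000]
print("worst case by year:", worst_case)
```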
In priority setting, program budgeting and marginal analysis is used to ensure
optimal allocation of the limited resources available in the organization based
on overall priorities. There are two parts to this analysis. The first part is
program budgeting, which compiles the resources and expenditures allocated to
existing services within the organization. The second part is marginal
analysis, in which key stakeholders in the organization make recommendations on
investment in new services and disinvestment from existing services based on a
set of predefined criteria. An example is multi-criteria decision analysis, in
which a performance matrix is used to compare and rank options against a set of
policy-relevant criteria such as cost-effectiveness, disease severity, and
affected population. The process
should be supported by hard and soft evidence, and reflect the values and
preferences of the stakeholder groups that are affected, for example the local
population (Tsourapas & Frew, 2011; Baltussen & Niessen, 2006; Mitton & Donaldson, 2004).
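A toy version of such a performance matrix is sketched below: each option is scored against weighted criteria and ranked by weighted total. The options, scores and weights are entirely hypothetical.

```python
# Criterion weights reflecting the stakeholder group's stated priorities (hypothetical).
weights = {"cost_effectiveness": 0.5, "disease_severity": 0.3, "population_affected": 0.2}

# Performance matrix: options scored 0-10 against each criterion (hypothetical).
options = {
    "expand telehealth":    {"cost_effectiveness": 7, "disease_severity": 5, "population_affected": 8},
    "new CPOE module":      {"cost_effectiveness": 6, "disease_severity": 8, "population_affected": 4},
    "retire legacy system": {"cost_effectiveness": 4, "disease_severity": 3, "population_affected": 6},
}

ranked = sorted(
    ((sum(scores[c] * w for c, w in weights.items()), name) for name, scores in options.items()),
    reverse=True,
)
for total, name in ranked:
    print(f"{name}: weighted score {total:.1f}")
```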
14.4 Best Practice Guidance
The scoping review by Bassi and Lau (2013) of 42 published eHealth economic
evaluation studies has found a lack of consistency in their design, analysis
and reporting. Such variability can affect the ability of healthcare
organizations to make evidence-informed eHealth investment decisions. At
present there is no best practice guidance in eHealth economic evaluation, but
there are two health economic evaluation standards that we can draw on for
guidance. These are the Consensus on Health Economic Criteria (CHEC) list and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. They are described below.
14.4.1 CHEC List
The Consensus on Health Economic Criteria (CHEC) was published as a checklist to assess the methodological quality of economic
evaluation studies in systematic reviews (Evers, Goossens, de Vet, van Tulder, & Ament, 2005). The list was created from an initial pool of items found in the
literature, then reduced with three Delphi rounds by 23 international experts.
The final list had 19 items, which are shown below (source: Table 1 in Evers et
al., 2005, p. 243).
- Is the study population clearly described?
- Are competing alternatives clearly described?
- Is a well-defined research question posed in answerable form?
- Is the economic study design appropriate to the stated objective?
- Is the chosen time horizon appropriate to include relevant costs and consequences?
- Is the actual perspective chosen appropriate?
- Are all important and relevant costs for each alternative identified?
- Are all costs measured appropriately in physical units?
- Are costs valued appropriately?
- Are all important and relevant outcomes for each alternative identified?
- Are all outcomes measured appropriately?
- Are outcomes valued appropriately?
- Is an incremental analysis of costs and outcomes of alternatives performed?
- Are all future costs and outcomes discounted appropriately?
- Are all the important variables, whose values are uncertain, appropriately subjected to sensitivity analysis?
- Do the conclusions follow from the data reported?
- Does the study discuss the generalizability of the results to other settings and patient/client groups?
- Does the article indicate that there are no potential conflicts of interest of study researchers and funders?
- Are ethical and distributional issues discussed appropriately?
The authors emphasized that the CHEC list should be regarded as a minimal set of items when used to appraise an
economic evaluation study in a systematic review. Their additional guidance is
to: (a) use two or more reviewers and start with a pilot when conducting the
systematic review to increase rigour; (b) treat the items as subjective
judgments of the quality of the study under review; and (c) accompany journal
publications with a detailed technical evaluation report.
14.4.2 CHEERS Checklist
The Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist was published in 2013 by the International Society for
Pharmacoeconomics and Outcomes Research (ISPOR) Health Economic Evaluation Publication Guidelines Good Reporting Practices
Task Force (Husereau et al., 2013). Its purpose was to provide recommendations
on the optimized reporting of health economic evaluation studies. Forty-four
items were collated initially from the literature and reviewed by 47
individuals from academia, clinical practice, industry and government through
two rounds of the Delphi process. A final list of 24 items with accompanying
recommendations was compiled into six categories. They are summarized below.
- Title and abstract – two items on having a title that identifies the study as an economic evaluation, and a structured summary of objectives, perspective, setting, methods, results and conclusions.
- Introduction – one item on study context and objectives, including its policy and practice relevance.
- Methods – 14 items on target populations, setting, perspective, comparators, time horizon, discount rate, choice of health outcomes, measurement of effectiveness, measurement and valuation of preference-based outcomes, approaches for estimating resources and costs, currency and conversion, model choice, assumptions, and analytic methods.
- Results – four items on study parameters, incremental costs and outcomes, describing uncertainty in sampling and assumptions, and describing potential heterogeneity in study parameters (e.g., patient subgroups).
- Discussion – one item on findings, limitations, generalizability and current knowledge.
- Others – two items on source of study funding and conflicts of interest.
14.5 Exemplary Cases
This section contains three examples of eHealth economic evaluation studies that
applied different approaches to determine the economic return on the investment
made. The examples cover cost-benefit analysis, cost-effectiveness analysis,
and simulation modelling. Readers interested in budget impact analysis may
refer to the following:
- Fortney, Maciejewski, Tripathi, Deen, and Pyne (2011) on telemedicine-based collaborative care for depression.
- Anaya, Chan, Karmarkar, Asch, and Goetz (2012) on facility cost of HIV testing for newly identified HIV patients.
14.5.1 Cost-benefit of EMR in Primary Care
Wang and colleagues (2003) conducted a cost-benefit study to examine the
financial impact of an EMR on their organization in the ambulatory care setting. The identified data sources were cost and benefit data from internal records, expert opinion and
the published literature. A five-year time horizon was used to cover all relevant
costs and benefits. The resource use measured was the net financial cost or benefit per physician over five years. The valuation of resource use was the present value of net benefit or cost over five years based on historical
data and expert estimates in 2002 U.S. dollars at a 5% discount rate.
The study findings showed the estimated net benefit was $86,400 per provider
over five years. The benefits were from reduced drug expenditures and billing
errors, improved radiology test utilization and increased charge capture.
One-way sensitivity analysis showed the net benefit varied from $8,400 to
$140,100 depending on the proportion of patients under capitated care. Five-way
sensitivity analysis with the most pessimistic and optimistic assumptions
showed results ranging from a $2,300 net cost to a $330,900 net benefit. This
study showed that an EMR in primary care can lead to a positive financial
return, depending on the reimbursement mix.
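The structure of such a one-way sensitivity analysis can be sketched as follows; the model and figures are illustrative assumptions only and do not reproduce Wang et al.'s actual cost-benefit model.

```python
def net_benefit_per_provider(capitation_share, years=5, rate=0.05):
    """Hypothetical five-year discounted net benefit per provider, varied over
    the proportion of capitated patients (illustrative figures only)."""
    one_time_cost = 30_000
    annual_benefit = 10_000 + 25_000 * capitation_share  # assume some benefits accrue only under capitation
    return -one_time_cost + sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"capitation share {share:.0%}: net benefit ~ {net_benefit_per_provider(share):,.0f}")
```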
14.5.2 Cost-effectiveness of Medication Ordering/Administration in Reducing
Adverse Drug Events
Wu, Laporte, and Ungar (2007) conducted a cost-effectiveness study to examine
the costs of adopting a medication ordering and administration system and its
potential impact on reducing adverse drug events (ADEs) within the organization. The identified data sources were system and workload costs from internal records and expert opinion, and
estimated ADE rates from the literature. The resource use measured was the annual cost and ADE rate projected over 10 years. The valuation of resource use was the annual system and workload costs, based on historical data and expert
estimates, expressed as net present value in 2004 Canadian and U.S. dollars at a 5% discount rate.
The study findings showed the incremental cost-effectiveness of the new system
was $12,700 USD per ADE prevented. Sensitivity analysis showed cost-effectiveness to be sensitive to
the ADE rate, cost of the system, effectiveness of the system, and possible costs from
increased physician workload.
14.5.3 Simulation Modelling of CPOE Implementation and Financial Impact
Ohsfeldt et al. (2005) conducted a simulation study on the cost of implementing CPOE in hospitals in a rural state and the financial implications of statewide
implementation. The identified data sources included existing clinical information system (CIS) status from a hospital mail survey, patient care revenue and hospital
operating cost data from the statewide hospital association, and vendor CPOE cost estimates. The resource use measured was the net financial cost or benefit per physician over five years. The valuation of resource use was the operating margin present value of net benefit or cost over five and 10
years based on historical data and expert estimates in 2002 U.S. dollars at a 5% discount rate. Quadratic interpolation models were used to
derive low and high cost estimates based on bed size and CIS category. Operating margins for the first and second years post-CPOE were compared across hospital types under different interest rates, depreciation
schedules, third-party reimbursements and fixed/marginal cost scenarios.
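The quadratic interpolation step can be illustrated with a short sketch; the bed sizes and vendor cost quotes below are hypothetical and are not Ohsfeldt et al.'s data.

```python
import numpy as np

# Hypothetical vendor cost quotes for CPOE implementation at three reference bed sizes.
beds = np.array([25, 100, 300])
cost = np.array([0.8e6, 2.5e6, 7.0e6])

# Fit a quadratic through the three points and interpolate for an intermediate bed size.
coeffs = np.polyfit(beds, cost, deg=2)
estimate_150 = np.polyval(coeffs, 150)
print(f"interpolated CPOE cost for a 150-bed hospital: ${estimate_150:,.0f}")
```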
The study findings showed CPOE led to substantial operating costs for rural and critical access hospitals
without substantial cost savings from improved efficiency or patient safety.
The cost impact was less but still dramatic for urban and rural referral
hospitals. For larger hospitals, modest benefits in cost savings or revenue
enhancement were sufficient to offset CPOE costs. In conclusion, statewide CPOE adoption may not be financially feasible for small hospitals without increased
payments or subsidies from third parties.
14.6 Implications
The eHealth economic evaluation methods described in this chapter have important
implications for policy-makers and researchers involved with the planning,
adoption and evaluation of eHealth systems. First, it is important to have a
basic understanding of the principles and application of different eHealth
economic evaluation methods as their selection is often based on a variety of
contexts, perspectives and assumptions. Second, when conducting an eHealth
economic evaluation it is important to be explicit in describing the
identification, measurement and valuation steps to ensure all of the important
and relevant costs and outcomes are included and handled appropriately. Third,
to ensure rigour and to increase the generalizability of the eHealth economic
evaluation study findings, one should adhere to the best practice guidance in
their design, analysis and reporting.
To ensure rigour one should be aware of and avoid the common “methodological flaws” in the design, analysis and reporting of economic evaluation studies, as
cautioned by Drummond and Sculpher (2005). The common design flaws are the
omission of important and relevant costs and outcomes and the inclusion of
inappropriate options for comparison, such as unusual local practice patterns
in usual care, which can lead to incomplete and erroneous results. The common
flaws in data collection and analysis are the problems of making indirect
clinical comparisons, inadequate representation of the underlying effectiveness
data, inappropriate extrapolation beyond the time period of the study,
over-reliance on assumptions, and inadequate handling of uncertainty. For
instance, the presence of major baseline group differences across the options
would make the results incomparable. The common flaws in reporting are the
inappropriate aggregation of results, inclusion of only the average
cost-effectiveness ratios, inadequate handling of generalizability, and
selective reporting of the findings. In particular, the reporting of average
cost-effectiveness ratios based on total costs divided by total effects is
common in the eHealth literature and can be misleading since it does not show
the incremental cost involved to produce an extra unit of outcome.
The generalizability of eHealth economic evaluation study findings can be
increased by drawing on the recommendations of the National Health Service
Health Technology Assessment Programme in the United Kingdom on the design,
analysis and reporting of economic evaluations (Drummond, Manca, & Sculpher, 2005). For trial-based studies, the design should ensure the
representativeness of the study sites and patients, the relevance of the
options for comparison, the ability to include different perspectives, the
separation of resource use data from unit costs or pricing, and the use of
health state preferences that are relevant to the populations being studied.
The analysis of multi-location/centre trials should test for the homogeneity of
the data prior to pooling of the results to avoid the clustering of treatment
effects. The reporting of trial-based results should include the
characteristics of the study sites supplemented with a detailed technical
report to help the readers better understand the contexts and decide if the
findings are relevant to their organizations.
For model-based studies, the design should be clear in specifying the decision
problem and options, identifying the stakeholders to be informed by the
decision model, and ensuring the modelling approaches are relevant to the
stakeholders (e.g., the perspective and objective function). The analysis of
model-based trials should justify its handling of the cost, resource use,
effectiveness and preference value data, especially when there is uncertainty
and heterogeneity in the data across groups, locations and practices. The
reporting of model-based results should include the justifications of the
parameter inputs to the model to ensure they are appropriate and relevant to
the stakeholders. Any pre-analysis done on the input data so they can be
incorporated into the model should be explained to justify its relevance.
14.7 Summary
This chapter described the different methods that are used in eHealth economic
evaluation. The methods cover different analytical approaches and the process
for resource costing and determining the outcomes. There are also published
best practice standards and guidelines that should be considered in the design,
analysis and reporting of eHealth economic evaluation studies. The three case
studies provide examples of how the economic evaluation of eHealth systems is
done using select methods.
References
Anaya, H. D., Chan, K., Karmarkar, U., Asch, S. M., & Goetz, M. B. (2012). Budget impact analysis of HIV testing in the VA healthcare system. Value in Health, 15(8), 1022–1028.
Baltussen, R., & Niessen, L. (2006). Priority setting of health interventions: the need for
multi-criteria decision analysis. Cost Effectiveness and Resource Allocation, 4, 14. doi: 10.1186/1478-7547-4-14
Bassi, J., & Lau, F. (2013). Measuring value for money: A scoping review on economic
evaluation of health information systems. Journal of the American Medical Informatics Association, 20(4), 792–801.
Brennan, A., & Akehurst, R. (2000). Modelling in health economic evaluation. What is its
place? What is its value? Pharmacoeconomics, 17(5), 445–459.
Brunetti, M., Shemilt, I., Pregno, S., Vale, L., Oxman, A. D., Lord, J., … Schunemann, H. J. (2013). GRADE guidelines: 10. Considering resource use and rating the quality of economic
evidence. Journal of Clinical Epidemiology, 66(2), 140–150.
Drummond, M. F., Sculpher, M. J., Torrance, G. W., O’Brien, B. J., & Stoddart, G. L. (2005). Methods for the economic evaluation of health care programmes (3rd ed.). Oxford: Oxford University Press.
Drummond, M., & Sculpher, M. (2005). Common methodological flaws in economic evaluations. Medical Care, 43(7 suppl.), 5–14.
Drummond, M., Manca, A., & Sculpher, M. (2005). Increasing the generalizability of economic evaluations:
Recommendations for the design, analysis and reporting of studies. International Journal of Technology Assessment in Health Care, 21(2), 165–171.
Evers, S., Goossens, M., de Vet, H., van Tulder, M., & Ament, A. (2005). Criteria list for assessment of methodological quality of
economic evaluations: Consensus on health economic criteria. International Journal of Technology Assessment in Health Care, 21(2), 240–245.
Fortney, J. C., Maciejewski, M. L., Tripathi, S. P., Deen, T. L., & Pyne, J. M. (2011). A budget impact analysis of telemedicine-based
collaborative care for depression. Medical Care, 49(9), 872–880.
Garattini, L., & van de Vooren, K. (2011). Budget impact analysis in economic evaluation: a
proposal for a clearer definition. European Journal of Health Economics, 12(6), 499–502.
Husereau, D., Drummond, M., Petrou, S., Carswell, C., Moher, D., Greenberg, D., … Loder, E. (2013). Consolidated health economic evaluation reporting standards (CHEERS) – Explanation and elaboration: A report of the ISPOR health economic evaluation publication guidelines good reporting practices task
force. Value in Health, 16, 231–250.
Mitton, C., & Donaldson, C. (2004). Health care priority setting: principles, practice and
challenges. Cost Effectiveness and Resource Allocation, 2, 3. doi: 10.1186/1478-7547-2-3
Ohsfeldt, R. L., Ward, M. M., Schneider, J. E., Jaana, M., Miller, T. R., Lee,
Y., & Wakefield, D. S. (2005). Implementation of hospital computerized physician
order entry systems in a rural state: Feasibility and financial impact. Journal of the American Medical Informatics Association, 12(1), 20–27.
Roberts, M. S. (2006). Economic aspects of evaluation. In C. P. Friedman & J. C. Wyatt (Eds.), Evaluation methods in biomedical informatics (2nd ed., pp. 301–337). New York: Springer.
Simoens, S. (2009). Health economic assessment: a methodological primer. International Journal of Environmental Research and Public Health, 6(12), 2950–2966.
Tsourapas, A., & Frew, E. (2011). Evaluating “success” in programme budgeting and marginal analysis: a literature review. Journal of Health Services Research & Policy, 16(3), 177–183.
Wang, S. J., Middleton, B., Prosser, L. A., Bardon, C. G., Spurr, C. D.,
Carchidi, P. J., … Bates, D. W. (2003). A cost-benefit analysis of electronic medical records in primary care. American Journal of Medicine, 114(5), 397–403.
Wu, R. C., Laporte, A., & Ungar, W. J. (2007). Cost-effectiveness of an electronic medication ordering
and administration system in reducing adverse drug events. Journal of Evaluation in Clinical Practice, 13(3), 440–448.
Appendix
Glossary of Terms
References for Appendix
AcqNotes. (n.d.). Retrieved from http://www.acqnotes.com/Tasks/Parametric%20Cost%20Estimating%20.html
Baltagi, B. H. (2011). Econometrics (5th ed.). New York: Springer.
Carey, K., Burgess, J. F., & Young, G. J. (2008). Specialty and full service hospitals: A comparative cost
analysis. Health Research and Educational Trust, 43(5, Part II), 1869–1887. doi: 10.1111/j.1475-6773.2008.00881.x
Chisholm, D. (1998). Economic analyses. International Review of Psychiatry, 10(4), 323–330.
Cnaan, A., Laird, N. M., & Slasor, P. (1997). Using the general linear mixed model to analyse unbalanced
repeated measures and longitudinal data. Statistics in Medicine, 16(20), 2349–2380.
Dawson, B., & Trapp, R. G. (2004). Basic and clinical biostatistics (4th ed.). New York: Lange Medical Books/McGraw-Hill.
Drummond, M. F., Sculpher, M. J., Torrance, G. W., O’Brien, B. J., & Stoddart, G. L. (2005). Methods for the economic evaluation of health care programmes (3rd ed.). Oxford: Oxford University Press.
Gapenski, L. C. (2009). Fundamentals of healthcare finance. Chicago: Health Administration Press.
Haber, J. R. (2008). Accounting demystified. New York: American Management Association/Amacom.
Online Business Dictionary. (n.d.). Fairfax, VA: WebFinance Inc. Retrieved from http://www.BusinessDictionary.com
Online Encyclopaedia. (n.d.). Boston: Cengage Learning. Retrieved from
http://www.encyclopedia.com
Ravindran, R. A. (Ed.). (2008). Operations research and management science handbook. New York: CRC Press/Taylor & Francis Group.
Roberts, M. S. (2006). Economic aspects of evaluation. In C. P. Friedman & J. C. Wyatt (Eds.), Evaluation methods in biomedical informatics (2nd ed., pp. 301–337). New York: Springer.
Robinson, R. (1993). Economic evaluation and health care: What does it mean? British Medical Journal, 307(6905), 670–673.
Simoens, S. (2009). Health economic assessment: a methodological primer. International Journal of Environmental Research and Public Health, 6(12), 2950–2966.
Sox, H., Blatt, M. A., Higgins, M. C., & Marton, K. T. (2006). Medical decision making (1st ed.). Boston: Butterworth-Heinemann.