Chapter 11
Methods for Descriptive Studies
Yulong Gu, Jim Warren
11.1 Introduction
Descriptive studies in eHealth evaluations aim to assess the success of eHealth
systems in terms of system planning, design, implementation, use and impact.
They focus on describing the process and impact of eHealth system development
and implementation, which are often contextualized within the implementation
environment (e.g., a healthcare organization). The
descriptive nature of the evaluation design distinguishes descriptive studies
from comparative studies such as a before/after study or a randomized
controlled trial. In a 2003 literature review on evaluations of inpatient
clinical information systems by van der Meijden and colleagues, four types of
study design were identified: correlational, comparative, descriptive, and case
study (van der Meijden, Tange, Troost, & Hasman, 2003). This review inherited the distinction between objectivist and
subjectivist studies described by Friedman and Wyatt (1997); in it, van der
Meijden and colleagues defined a descriptive study as an objectivist study that
measures outcome variable(s) against predefined requirements, and a case study
as a subjectivist study of a phenomenon in its natural context using data,
whether quantitative or qualitative, from multiple sources (van der Meijden et al., 2003). For simplicity,
we include case studies under the descriptive study category in this chapter,
and present the methodological components of qualitative, quantitative, and
mixed methods designs for eHealth evaluations within this category. Adopting this wider
scope, the following sections introduce the types of descriptive studies in
eHealth evaluations, address methodological considerations, and provide
examples of such studies.
11.2 Types of Descriptive Studies
There are five main types of descriptive studies undertaken in eHealth
evaluations. These are distinguished by the overall study design, the methods of
data collection and analysis, and the objectives and assumptions of
the evaluation. The five types can be termed: qualitative studies, case
studies, usability studies, mixed methods studies, and other methods studies
(including ethnography, action research, and grounded theory studies).
11.2.1 Qualitative Studies
The methodological approach of qualitative studies for eHealth evaluations is
particularly appropriate when “we are interested in the ‘how’ or ‘why’ of processes and people using technology” (McKibbon, 2015). Qualitative study design can be used in both formative and
summative evaluations of eHealth interventions. The qualitative methods of data
collection and analysis include observation, documentation, interview, focus
group, and open-ended questionnaire. These methods help evaluators understand
the experiences of people who use, or plan to use, eHealth solutions.
In qualitative studies, an interpretivist view is often adopted. This means
qualitative researchers start from the position that their knowledge of reality
is a social construction by human actors; their theories concerning reality are
ways of making sense of the world, and shared meanings are a form of
intersubjectivity rather than objectivity (Walsham, 2006). There is also
increasing uptake of critical theory and critical realism in qualitative health
evaluation research (McEvoy & Richards, 2003). The assumption for this paradigm is that reality exists
independent of the human mind regardless of whether it can be comprehended or
directly experienced (Levers, 2013). Irrespective of the different
epistemological assumptions, qualitative evaluations of eHealth interventions
apply similar data collection tools and analysis techniques to describe,
interpret, and challenge people’s perceptions and experiences with the environment where the intervention has
been implemented or is being planned for implementation.
11.2.2 Case Studies
A case study investigates a contemporary phenomenon within its real-life
context, especially when the boundaries between phenomenon and context are not
clearly evident (Yin, 2011). Case study methods are commonly used in the social
sciences and, since the 1980s, increasingly in information systems (IS) research, to produce meaningful results from a holistic
investigation into the complex and ubiquitous interactions among organizations,
technologies, and people (Dubé & Paré, 2003). The key decisions in designing a case study involve: (a) how to define
the case being studied; (b) how to determine the relevant data to be collected;
and (c) what should be done with the data once collected (Yin, 2011). These
decisions remain the crucial questions to ask when designing an eHealth
evaluation case study. In eHealth evaluations, the fundamental question
regarding the case definition is often answered based on consultation with a
range of eHealth project stakeholders. Investigations should also be undertaken
at an early stage in the case study design into the availability of qualitative
data sources — whether informants or documents — as well as the feasibility of collecting quantitative data. For instance,
eHealth systems often leave digital footprints in the form of system usage
patterns and user profiles which may help in assessing system uptake and
potentially in understanding system impact.
Case study design is versatile and flexible; it can be used with any
philosophical perspective (e.g., positivist, interpretivist, or critical); it
can also combine qualitative and quantitative data collection methods (Dubé & Paré, 2003). Case study research can involve a single case study or multiple case
studies; and can take the strategy of an explanatory, exploratory or
descriptive approach (Yin, 2011). The quality of eHealth evaluation case
studies relies on choosing appropriate study modes according to the purpose and
context of the evaluation. This context should also be described in detail in
the study reporting; this will assist with demonstrating the credibility and
generalizability of the research results (Benbasat, Goldstein, & Mead, 1987; Yin, 2011).
11.2.3 Usability Studies
Usability of an information system refers to the capacity of the system to allow
users to carry out their tasks safely, effectively, efficiently and enjoyably
(Kushniruk & Patel, 2004; Preece, Rogers, & Sharp, 2002; Preece et al., 1994). Kushniruk and Patel (2004) categorized the
usability studies that involve user representatives as usability testing
studies and the expert-based studies as usability inspection studies. They
highlighted heuristic evaluation (Nielsen & Molich, 1990) and cognitive walkthrough (Polson, Lewis, Rieman, & Wharton, 1992) as two useful expert-based usability inspection approaches.
Usability studies can evaluate an eHealth system in terms of both the design
and its implementation. The goals of usability evaluations include assessing
the extent of system functionality, the effect of interface on users, and
identifying specific problems. Usability testing should be considered in all
stages of the system design life cycle. Testing early and often is a valuable
principle for producing a usable system (e.g., obtaining usability evaluation
results from early-stage prototypes, including paper prototypes).
Another principle, although challenging for eHealth innovations, is to involve
users early and often, that is, to keep real users close to the design process. The interaction design
model (Cooper, 2004) recommends having at least one user as part of the design
team from the beginning, so that, right from the formulation of the product, its
concept makes sense to the type of users it is aimed at; these users should also participate in the usability
testing.
A classic usability study is done through user participation, either in a
laboratory setting or in the natural environment. There is also a suite of
techniques that are sometimes called “discount” usability testing or expert-based evaluation (as they are applied by usability
experts rather than end users). The most prominent expert-based approach is
heuristic evaluation (Nielsen & Molich, 1990). Whichever approach is taken for usability studies, the target
measures for usability are similar:
- How long is it taking users to do the task?
- How accurate are users in doing the task?
- How long does it take users to learn to do the task with the system?
- How well do users remember how to use the system from earlier sessions?
- And, in general, how satisfied are users after performing the task with the tool?
A usability specification can combine these five measures into requirements,
such as: at least 90% of users can perform a given task correctly within no
more than five minutes one week after completing a 30-minute tutorial.
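To illustrate how such a specification can be made operational, the following sketch checks the accuracy-within-a-time-limit criterion against task-level records from a testing session. It is a minimal illustration only; the data structure, threshold values and sample data are assumptions rather than part of any particular usability toolkit.

```python
# A minimal sketch, not tied to any particular usability toolkit, of how the
# specification above might be checked against task-level testing data.
# The record fields, thresholds and sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TaskAttempt:
    participant_id: str
    completed_correctly: bool
    time_taken_seconds: float  # time to complete the task after the tutorial

TARGET_PROPORTION = 0.90     # "at least 90% of users ..."
TIME_LIMIT_SECONDS = 5 * 60  # "... within no more than five minutes"

def specification_met(attempts):
    """Return True if the share of attempts completed correctly within the
    time limit meets or exceeds the target proportion."""
    if not attempts:
        return False
    passing = [a for a in attempts
               if a.completed_correctly and a.time_taken_seconds <= TIME_LIMIT_SECONDS]
    return len(passing) / len(attempts) >= TARGET_PROPORTION

# Illustrative data from a hypothetical post-tutorial testing session.
session = [
    TaskAttempt("P01", True, 210.0),
    TaskAttempt("P02", True, 290.0),
    TaskAttempt("P03", False, 180.0),
    TaskAttempt("P04", True, 250.0),
]
print(specification_met(session))  # False: only 3 of 4 (75%) met the criterion
```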
11.2.4 Mixed Methods Studies
Increasing uptake and recognition of mixed methods studies, which combine
qualitative and quantitative components in one research study, have been
observed in the health sciences and health services research (Creswell, Klassen,
Plano Clark, & Smith, 2011; Wisdom, Cavaleri, Onwuegbuzie, & Green, 2012). Mixed methods studies draw on the strength of utilizing multiple
methods, but have challenges inherent to the approach as well, such as how to
justify diverse philosophical positions and multiple theoretical frameworks,
and how to integrate multiple forms of data. A key element in reporting mixed
methods studies is to describe the study procedures in detail to inform readers
about the study quality.
Given the nature of eHealth innovations (often new, complex and hard to measure), a mixed methods design is particularly suitable for their evaluation,
collecting robust evidence not only on their effectiveness but also on the
real-life context of their implementation. For instance, system transactional
data may indicate technology uptake and usage patterns, while end-user
interviews capture people's insights into why they think certain events have happened and how things could be done
better.
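As a minimal illustration of the quantitative strand, the sketch below summarizes system transaction logs into monthly active-user counts as one simple indicator of uptake. The log format is a hypothetical assumption; real eHealth systems expose usage data in their own schemas, and a full mixed methods evaluation would interpret such counts alongside the qualitative findings.

```python
# A minimal sketch of the quantitative strand: summarising system transaction
# logs into monthly active-user counts as one indicator of uptake. The
# (user_id, timestamp) log format is a hypothetical assumption.

from collections import defaultdict
from datetime import datetime

def monthly_active_users(log_rows):
    """log_rows: iterable of (user_id, ISO 8601 timestamp) tuples."""
    users_by_month = defaultdict(set)
    for user_id, timestamp in log_rows:
        month = datetime.fromisoformat(timestamp).strftime("%Y-%m")
        users_by_month[month].add(user_id)
    return {month: len(users) for month, users in sorted(users_by_month.items())}

logs = [
    ("gp-001", "2023-03-02T09:14:00"),
    ("gp-002", "2023-03-15T11:02:00"),
    ("gp-001", "2023-04-01T08:45:00"),
]
print(monthly_active_users(logs))  # {'2023-03': 2, '2023-04': 1}
```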
11.2.5 Other Methods (ethnography, action research, grounded theory)
In addition to the above four main categories of designs used in eHealth
evaluation studies, this section introduces a few other relevant and powerful
approaches, including ethnography, action research, and grounded theory
methods.
- With origins in anthropology, an ethnographic approach to information systems research aims to provide rich insights into the human, social and organizational aspects of systems development and application (Harvey & Myers, 1995). A distinguishing feature of ethnographic research is participant observation, that is, the researcher must have been there and “lived” there for a reasonable length of time (Myers, 1997a). Interviews, surveys, and field notes can also be used in ethnographic studies to collect data.
- Similarly, multiple data collection methods can be used in an action research study. The key feature of action research design is its “participatory, democratic process concerned with developing practical knowing” (Reason & Bradbury, 2001, p. 1). Action research studies naturally mix the problem-solving activities with research activities to produce knowledge (Chiasson, Germonprez, & Mathiassen, 2009), and often take an iterative process of planning, acting, observing, and reflecting (McNiff & Whitehead, 2002).
- Grounded theory is defined as an inductive methodology to generate theories through a rigorous research process leading to the emergence of conceptual categories; and these concepts as categories are related to each other as a theoretical explanation of the actions that continually resolve the main concern of the participants in a substantive area (Glaser & Strauss, 1967; Rhine, 2008). In the field of information systems research, grounded theory methodology is useful for developing context-based, process-oriented descriptions and explanations of the phenomena (Myers, 1997b). A 2013 review found that the most common use of grounded theory in Information Systems studies is the application of grounded theory techniques, typically for data analysis purposes (Matavire & Brown, 2013).
It is worth noting that the use of the above methods does not exclude other
designs. For instance, ethnographic observations can be undertaken as one
element in a mixed methods case study (Greenhalgh, Hinder, Stramer, Bratan, & Russell, 2010).
11.3 Methodological Considerations
There are a range of methodological issues that need to be considered when
designing, undertaking and reporting a descriptive eHealth evaluation. These
issues may emerge throughout the study procedures, from defining study
objectives to presenting data interpretation. This section provides a quick
guide for addressing the most critical issues in order to choose and describe
an appropriate approach in your study.
11.3.1 Study Objectives and Questions
The high-level goals of an eHealth evaluation study are often planned in the
initial phase of the study. The goals define what the study is meant to reveal
and what is to be learned. These may be documented as a multilevel statement of
high-level intentions or questions. This statement is then expanded in the
methodology section of the final study report into specific aspects of the
purpose of the evaluation, that is, the things you want to find out. For instance,
if the innovation were an electronic referral (e-referral) system, these aspects might include:
- The acceptance of e-referrals by all impacted healthcare workers.
- The impact of e-referrals on safety, efficiency and timeliness of healthcare delivery.
- The key problems and issues emerging from a technical and management perspective in implementation of e-referrals.
Some of the above specific statements may be expressed as testable hypotheses;
for example, “Use of e-referrals is widely accepted by General Practitioners (GPs).” A good use of expanded objectives is to state specific research questions; for
example, we might ask, “Do GPs prefer e-referrals to hard copy referrals?” as part of the “acceptance” assessment objective above.
11.3.2 Observable and Contextual Variables
In many cases, eHealth evaluation will be linked to (as part of, or coming
after) a health IS implementation project that had a business case based on specific expected
benefits of the technology, and specific functional and non-functional
requirements as critical success factors of the project. These should be part
of the evaluation’s benefits framework. International literature (e.g., the benefits found with
similar technology when evaluated overseas) may also inform the framework. The
establishment of a benefits framework in an eHealth evaluation will dictate the
study design and variable selection, as well as the methods of data collection
and analysis. For instance, observable variables to measure system outcomes may
include: mortality, morbidity, readmission, length of stay, patient functional
status or quality of health/life.
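As a hypothetical illustration, the sketch below summarizes two such observable variables, length of stay and 30-day readmission, from a flat extract of admission records. The field names and sample values are invented; a real study would draw on the organization's own data model and the variables agreed in its benefits framework.

```python
# A minimal sketch, assuming a hypothetical flat extract of admission records,
# of how two observable outcome variables (length of stay and 30-day
# readmission) might be summarised for a benefits framework. Field names and
# sample values are invented for illustration.

from statistics import mean, median

admissions = [
    # (patient_id, length_of_stay_days, readmitted_within_30_days)
    ("A1", 3, False),
    ("A2", 7, True),
    ("A3", 2, False),
    ("A4", 5, False),
]

lengths_of_stay = [los for _, los, _ in admissions]
readmission_rate = sum(1 for _, _, readmitted in admissions if readmitted) / len(admissions)

print(f"Mean length of stay:     {mean(lengths_of_stay):.1f} days")
print(f"Median length of stay:   {median(lengths_of_stay):.1f} days")
print(f"30-day readmission rate: {readmission_rate:.0%}")
```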
One of the strengths of descriptive studies is that the study findings are
contextualized within the system implementation environment. Hence, it is a
good practice to explain in the methodology which system(s) are being evaluated,
including the technologies introduced, the years and geography of implementation
and use, as well as the healthcare delivery organizations and user groups
involved in their use. Contextual variables also include those detailing the
evaluation parameters such as research study period and those contextual
conditions that are relevant to the system implementation success or failure,
for example, organizational structure and funding model.
11.3.3 Credibility, Authenticity and Contextualization
The philosophy of evaluation adopted, along with the detailed research
procedures, should be described to demonstrate the study's rigour, reliability,
validity and credibility. The methods used should also be detailed (e.g.,
interviews of particular user or management groups, analysis of particular data
files, statistical procedures, etc.). Data triangulation (examining the
consistency of different data sources) is a common technique to enhance the
research quality. Where any particularly novel methods are used, they should be
explained with reference to academic literature and/or particular projects from
which they have arisen; ideally, they should be justified with comparison to
other methods that suit similar purposes.
Authenticity is regarded as a feature particular to naturalistic inquiry (and
ethnographic naturalism), an approach to inquiry that aims to generate a
genuine or true understanding of people’s experiences (Schwandt, 2007). In a wider sense of descriptive eHealth
evaluation studies, it is important to maintain research authenticity — to convey a genuine understanding of the project stakeholders’ experiences from their own point of view.
Related to the above discussion on credibility and authenticity, the goal of
contextualizing study findings is to support the final theory by seeing whether
“the meaning system and rules of behaviour make sense to those being studied” (Neuman, 2003). For example, to draw a “rich picture” of the impact of the evaluated eHealth implementation, the study may inquire
and report on “How has it impacted the social context (e.g., communications, perceived roles
and responsibilities, and how the users feel about themselves and others)?”
11.3.4 Theoretical Sampling and Saturation
Theoretical sampling is an important tool in grounded theory studies. It means
deciding, on analytic grounds, what data to collect next and where to find them
(Glaser & Strauss, 1967). This requires calculation and imagination from the analyst in
order to move the theory along quickly and efficiently. The basic criterion
governing the selection of comparison groups for discovering theory is their
theoretical relevance for furthering the development of emerging
categories (Glaser & Strauss, 1967).
In studies that collect data via interviews, ideally the interviewing should
continue, extending with further theoretical sampling, until the evaluators
have reached “saturation” — the point where all the relevant contributions from new interviewees neatly fit
categories identified from earlier interviews. Often time and budget do not
allow full saturation, in which case the key topics of interest and major data
themes need to be confirmed, for example, by repeated emphasis from
individuals in similar roles.
11.3.5 Data Collection and Analysis
Descriptive studies may use a range of diverse and flexible methods in data
collection and analysis. Detailed description of the data collection methods
used will help readers understand exactly how the study achieves the
measurements that are relevant to your approach and measurement criteria. This
includes how interviewees are identified, the sources of documents and
electronic data, and any pre-planned interview questions and questionnaires.
In terms of describing quantitative data analysis methods, all statistical
procedures associated with the production of quantitative results need to be
stated. Similarly, all analysis protocols for qualitative data should be
clarified (e.g., the data coding methods used).
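As a small illustration of reporting one step of a qualitative analysis protocol, the sketch below tallies coded interview segments by theme so that the coding outcome can be presented transparently. The codes and excerpts are invented for illustration; an actual coding framework would come from the study's own protocol.

```python
# A hedged illustration of one simple qualitative analysis step: tallying
# coded interview segments by theme so the coding scheme can be reported
# transparently. The codes and excerpts are invented; an actual coding
# framework would come from the study's own analysis protocol.

from collections import Counter

coded_segments = [
    ("workflow_fit", "The referral form matches how we already work."),
    ("training_gap", "Nobody showed us the new screen before go-live."),
    ("workflow_fit", "It saves me re-typing the patient details."),
]

theme_counts = Counter(code for code, _ in coded_segments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} coded segment(s)")
```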
11.3.6 Interpretation and Dissemination
Key findings from descriptive studies should provide answers to the research
objectives/questions. In general, these findings can be tabulated against the
benefits framework you introduced as part of the methodology. Interpretation of
the findings may characterize how the eHealth intervention enabled a
transformation in healthcare practices. Moreover, when explaining the
interpretation and implications drawn from the evaluation results, the key
implications can be organized into formal recommendations.
In terms of evaluation dissemination, the study findings should reach all
stakeholders considering uptake of similar technology. Evaluation and
dissemination should be considered as iterative cycles; feedback from
dissemination of interim findings is a valuable component of the evaluation in
itself. A dissemination strategy should be planned, specifying the dissemination
time frame and pathways (e.g., conventional written reporting, face-to-face
reporting, Web 2.0, commercial media and academic publications).
11.4 Exemplary Cases
This section illustrates two descriptive eHealth evaluation studies, one case
study as part of the commissioned evaluation on the implementation and impact
of the summary care record (SCR) and HealthSpace programmes in the United Kingdom, and the other study from
Canada as a usability evaluation to inform Alberta’s personal health record (PHR) design. These two examples demonstrate how to design a descriptive study
applying a range of data collection and analysis methods to achieve the
evaluation objectives.
11.4.1 United Kingdom HealthSpace Case Study
Between 2007 and 2010, an independent evaluation was commissioned by the U.K. Department of Health to evaluate the implementation and impact of the summary
care record (SCR) and HealthSpace programmes (Greenhalgh, Stramer et al., 2010; Greenhalgh,
Hinder et al., 2010). SCR was an electronic summary of key health data drawn from a patient’s GP-held electronic record and accessible over a secure Internet connection by
authorized healthcare staff. HealthSpace was an Internet-accessible personal
organizer into which people could enter health data and plan health appointments.
Through an advanced HealthSpace account, they could gain secure access to their
SCR and e-mail their GP using a function called Communicator.
This evaluation undertook a mixed methods approach using a range of data sources
and collection methods to “capture as rich a picture of the programme as possible from as many angles as
possible” (Greenhalgh, Hinder et al., 2010). The evaluation fieldwork involved seven
interrelated empirical studies, including a multilevel case study of
HealthSpace covering the policy-making process, implementation by the English
National Health Service (NHS) organizations, and experiences of patients and carers. In the case study,
evaluators reviewed the national registration statistics on the HealthSpace
uptake rate (using the number of basic and advanced HealthSpace accounts
created). They also studied the adoption and non-adoption of HealthSpace by 56
patients and carers using observation and interview methods. In addition, they
interviewed 160 staff in national and local organizations, and collected 3,000
pages of documents to build a picture of the programme in context. As part of
the patient study, ethnographic observation was undertaken by a researcher who
shadowed 20 participants for two or three periods of two to five hours each at
home and work, and noted information needs as they arose and how these were
tackled by the participant. An in-depth picture of HealthSpace conception,
design, implementation, utilization (or non-use and abandonment, in most cases)
and impact was constructed from this mixed methods approach that included both
quantitative uptake statistics and qualitative analysis of the field notes,
interview transcripts, documents and communication records.
The case study showed that the HealthSpace personal electronic health record was
poorly taken up by people in England, and it was perceived as neither useful
nor easy to use. The study also made several recommendations for future
development of similar technologies, including the suggestion to conceptualize
them as components of a sociotechnical network and to apply user-centred design
principles more explicitly. The overall evaluation of the SCR and HealthSpace recognized the scale and complexity of both programmes and observed that “greatest progress appeared to be made when key stakeholders came together in
uneasy dialogue, speaking each other’s languages imperfectly and trying to understand where others were coming from,
even when the hoped-for consensus never materialised” (Greenhalgh, Hinder et al., 2010).
11.4.2 Usability Evaluation to Inform Alberta’s PHR Design
The Alberta PHR was a key component of the online consumer health application, the Personal
Health Portal (PHP), deployed in the Province of Alberta, Canada. The PHR usability evaluation (Price, Bellwood, & Davies, 2015) was part of the overall PHP benefit evaluation, which was embedded into the life cycle of the PHP program throughout the predesign, design and adoption phases. Although the PHR was based on a
commercial product, its usability evaluation aimed to assess the early design of the PHR software and to provide constructive feedback and recommendations to the PHR project team in a timely way, so as to improve the software prior to its launch.
Between June 2012 and April 2013, a combination of usability inspection
(applying heuristic inspection and persona-based inspection methods) and
usability testing (with 21 representative end users) was used in Alberta’s PHR evaluation. For the persona-based inspection, two patient personas were
developed; for each persona, scenarios were developed to illustrate expected
use of the PHR. Then in the user testing protocol, participants were asked to “think aloud” while performing two sets of actions: (a) to explore the PHR freely, and (b) to follow specific scenarios matching the expected activities
of the targeted end users that covered all key PHR tasks. Findings from the usability inspection and testing were largely
consistent and were used to generate several recommendations regarding the PHR information architecture, content and presentation. For instance, the usability
inspection identified that the PHR had a deep navigation hierarchy with several layers of screens before patient
health data became available. This was also confirmed in usability testing when
users sometimes found the module segmentation confusing. Accordingly, the
evaluation researchers recommended revising the structure and organization
of the modules with clearer top-level navigation, a combination of
content-oriented tabs and user-specific tabs, and a “home” tab providing a clear clinical summary.
Usability evaluation can be conducted at several stages in the development life
cycle of eHealth systems to improve the design — from the earliest mock-ups (ideally starting with paper prototypes), on
partially completed systems, or once the system is installed and undergoing
maintenance. The Alberta PHR study represents an exemplary case of usability evaluation informing the
development of a government-sponsored PHR project. It demonstrates the feasibility and value of early usability evaluation
in eHealth projects for producing a usable system, in this case by identifying
and addressing usability problems prior to rollout.
11.5 Summary
Descriptive evaluation studies describe the process and impact of the
development and implementation of a system. The findings are often
contextualized within the implementation environment, such as — for our purposes — the specific healthcare organization. Descriptive evaluations utilize a variety
of qualitative and quantitative data collection and analysis methods; and the
study design can apply a range of assumptions, from positivist or
interpretivist perspectives, to critical theory and critical realism. These
studies are used in both formative evaluations and summative evaluations.
References
Benbasat, I., Goldstein, D. K., & Mead, M. (1987). The case research strategy in studies of information systems. Management Information Systems Quarterly, 11(3), 369–386. doi: 10.2307/248684
Chiasson, M., Germonprez, M., & Mathiassen, L. (2009). Pluralist action research: a review of the information
systems literature. Information Systems Journal, 19(1), 31–54. doi: 10.1111/j.1365-2575.2008.00297.x
Cooper, A. (2004). The inmates are running the asylum: Why high-tech products drive us crazy and
how to restore the sanity. Carmel, CA: Sams Publishing.
Creswell, J. W., Klassen, A. C., Plano Clark, V. L., & Smith, K. C. (2011, August). Best practices for mixed methods research in the health sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of
Health. Retrieved from http://obssr.od.nih.gov/mixed_methods_research
Dubé, L., & Paré, G. (2003). Rigor in information systems positivist case research: Current
practices, trends, and recommendations. Management Information Systems Quarterly, 27(4), 597–635.
Friedman, C., & Wyatt, J. (1997). Evaluation methods in medical informatics. New York: Springer-Verlag.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Pub. Co.
Greenhalgh, T., Hinder, S., Stramer, K., Bratan, T., & Russell, J. (2010). Adoption, non-adoption, and abandonment of a personal
electronic health record: case study of HealthSpace. British Medical Journal, 341(7782), c5814. doi: 10.1136/bmj.c5814.
Greenhalgh, T., Stramer, K., Bratan, T., Byrne, E., Russell, J., Hinder, S., & Potts, H. (2010, May). The devil’s in the detail. Final report of the independent evaluation of the Summary Care
Record and HealthSpace programmes. London: University College London. Retrieved from
https://www.ucl.ac.uk/news/scriefullreport.pdf
Harvey, L., & Myers, M. D. (1995). Scholarship and practice: the contribution of ethnographic
research methods to bridging the gap. Information Technology & People, 8(3), 13–27.
Kushniruk, A. W., & Patel, V. L. (2004). Cognitive and usability engineering methods for the
evaluation of clinical information systems. Journal of Biomedical Informatics, 37(1), 56–76. doi: 10.1016/j.jbi.2004.01.003
Levers, M.-J. D. (2013). Philosophical paradigms, grounded theory, and perspectives on emergence. SAGE Open (October-December). doi: 10.1177/2158244013517243
Matavire, R., & Brown, I. (2013). Profiling grounded theory approaches in information systems research. European Journal of Information Systems, 22(1), 119–129.
McEvoy, P., & Richards, D. (2003). Critical realism: a way forward for evaluation research in
nursing? Journal of Advanced Nursing, 43(4), 411–420. doi: 10.1046/j.1365-2648.2003.02730.x
McKibbon, A. (2015). eHealth evaluation: Introduction to qualitative methods. Waterloo, ON: National Institutes of Health Informatics, Canada. Retrieved from
http://www.nihi.ca/index.php?MenuItemID=415
McNiff, J., & Whitehead, J. (2002). Action research: Principles and practice (2nd ed.). London: Routledge.
Myers, M. D. (1997a). ICIS Panel 1995: Judging qualitative research in information systems: Criteria for
accepting and rejecting manuscripts. Criteria and conventions used for judging
manuscripts in the area of ethnography. Retrieved from
http://www.misq.org/skin/frontend/default/misq/MISQD_isworld/iciseth.htm
Myers, M. D. (1997b). Qualitative research in information systems. Management Information Systems Quarterly, 21(2), 241–242. doi: 10.2307/249422
Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approach (5th ed.). Boston: Pearson Education, Inc.
Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 1 to 5, Seattle, Washington, U.S.A.
Polson, P. G., Lewis, C., Rieman, J., & Wharton, C. (1992). Cognitive walkthroughs: A method for theory-based
evaluation of user interfaces. International Journal of Man-Machine Studies, 36(5), 741–773. doi: 10.1016/0020-7373(92)90039-N
Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: beyond human-computer interaction. New York: Wiley.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-computer interaction. New York: Addison-Wesley Publishing Company.
Price, M., Bellwood, P., & Davies, I. (2015). Using usability evaluation to inform Alberta’s personal health record design. Studies in Health Technology and Informatics, 208, 314–318.
Reason, P., & Bradbury, H. (Eds.). (2001). Handbook of action research: Participative inquiry and practice (1st ed.). London: SAGE Publications.
Rhine, J. (2008, Wednesday, July 23). The Grounded Theory Institute: The
official site of Dr. Barney Glaser and classic grounded theory. Retrieved from
http://www.groundedtheory.com/
Schwandt, T. A. (2007). The SAGE dictionary of qualitative inquiry (3rd ed.). Thousand Oaks, CA: SAGE Publications.
van der Meijden, M. J., Tange, H. J., Troost, J., & Hasman, A. (2003). Determinants of success of inpatient clinical information systems: A literature review. Journal of the American Medical Informatics Association, 10(3), 235–243. doi: 10.1197/jamia.M1094
Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems, 15(3), 320–330.
Wisdom, J. P., Cavaleri, M. A., Onwuegbuzie, A. J., & Green, C. A. (2012). Methodological reporting in qualitative, quantitative, and
mixed methods health services research articles. Health Services Research, 47(2), 721–745. doi: 10.1111/j.1475-6773.2011.01344.x
Yin, R. K. (2011). Case study research: Design and methods (Vol. 5). Thousand Oaks, CA: SAGE Publications.