In This Article: Qualitative Comparative Analysis (QCA)

  • Introduction
  • The Emergence of QCA
  • Comparisons with Other Techniques
  • Criticisms of QCA
  • Case Selection and Combining Cross-Case and Within-Case Analysis
  • Model Specification and Parameters of Fit
  • Applications of QCA
  • QCA Software

Qualitative Comparative Analysis (QCA), by Axel Marx. Last reviewed: 13 November 2018. Last modified: 28 November 2016. DOI: 10.1093/obo/9780199756384-0188

The social sciences use a wide range of research methods and techniques, from experiments to techniques for analyzing observational data, such as statistical analysis, qualitative text analysis, and ethnography. In the 1980s a new technique emerged, Qualitative Comparative Analysis (QCA), which aimed to provide a formalized way to systematically compare a small number (5 < N < 75) of case studies. John Gerring, in the 2001 version of his introduction to the social sciences, identified QCA as one of the only genuine methodological innovations of the last few decades. In recent years, QCA has also been applied to large-N studies (Glaesser 2015, cited under Applications of QCA; Ragin 2008, cited under The Essential Features of QCA), and the application of QCA to large-N analysis is in full development. This annotated bibliography provides an overview of the main contributions on QCA as a research technique, introduces some specific issues, and surveys QCA applications. The contribution starts by sketching the emergence of QCA and situating the method in the debate between “qualitative” and “quantitative” methods. This contextualization is important for understanding and appreciating that QCA is in essence a qualitative, case-based research technique and not a quantitative, variable-oriented one. Next, the article discusses some key features of QCA and identifies the main books and handbooks on QCA as well as some of the criticism it has drawn. The third section focuses on the importance of cases and case selection in QCA. The fourth section introduces the way in which QCA builds explanatory models and presents the key contributions on the selection of explanatory factors, model specification, and testing. The fifth section canvasses the applications of QCA in the social sciences and identifies some interesting examples. Finally, since QCA is a formalized data-analytic technique based on algorithms, the overview ends with the main software packages that can assist in applying QCA.

Qualitative Case-Based Research in the Social Sciences

This section grounds Qualitative Comparative Analysis (QCA) in the tradition of qualitative case-based methods. As a research approach, QCA mainly focuses on the systematic comparison of cases in order to find patterns of difference and similarity between them. The initial intention of Ragin 1987 (cited under The Essential Features of QCA) was to develop an original “synthetic strategy” as a middle way between the case-oriented (or “qualitative”) and the variable-oriented (or “quantitative”) approaches, one that would “integrate the best features of the case-oriented approach with the best features of the variable-oriented approach” (Ragin 1987, p. 84). However, instead of grounding qualitative research on the premises of quantitative research, as King, et al. 1994 did, Ragin aimed to develop a method firmly rooted in a case-based qualitative approach (see Ragin and Becker 1992; Ragin 1997 for a systematic discussion of the differences between QCA and the approach advocated by King, et al. 1994). In recent years the fundamental differences between case-based and variable-oriented approaches have been further elaborated in terms of the selection of units of observation or cases, approaches to explanation, causal analysis, measurement of concepts, and external validity (scope and generalization). Many researchers, including Charles Ragin, Andrew Bennett (George and Bennett 2005), John Gerring (Gerring 2007, Gerring 2012), David Collier (Brady and Collier 2004), and James Mahoney (Mahoney and Rueschemeyer 2003), have contributed significantly to identifying the key ontological, epistemological, and logical differences between the two approaches. Goertz and Mahoney 2012 brings this literature together and shows the distinct differences between quantitative and qualitative research; the authors refer to two “cultures” of conducting social-scientific research. In this distinction QCA falls firmly in the “camp” of qualitative research. The overview below identifies some key texts which discuss these differences in more depth.

Brady, H., and D. Collier, eds. 2004. Rethinking social inquiry: Diverse tools, shared standards. Lanham, MD: Rowman and Littlefield.

This edited volume engages in a detailed discussion with King, et al. 1994 and shows the distinctive strengths of different approaches, with a strong emphasis on those of qualitative case-based methods. The book also introduces the idea of process-tracing for within-case analysis. Reprinted 2010.

George, A., and A. Bennett. 2005. Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Very extensive treatment of how case-based research focusing on longitudinal analysis and process-tracing can contribute to both theory development and theory testing. Discusses many examples from empirical political science research.

Gerring, J. 2007. Case study research: Principles and practices. Cambridge, UK: Cambridge Univ. Press.

A very good introduction to what a case study is and what analytic and descriptive purposes it serves in social science research.

Gerring, J. 2012. Social science methodology: A unified framework. Cambridge, UK: Cambridge Univ. Press.

An update of the 2001 volume, providing a concise introduction to different research approaches and techniques in the social sciences. Clearly shows the added value of different approaches and aims to overcome “one versus the other” thinking.

Goertz, G., and J. Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton, NJ: Princeton Univ. Press.

This book elaborates the differences between qualitative and quantitative research in terms of (1) approaches to explanation, (2) conceptions of causation, (3) approaches toward multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement.

King, G., R. Keohane, and S. Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton Univ. Press.

A much-quoted and highly influential book on research design for the social sciences. It aimed to discuss and assess qualitative research and argued that qualitative research should be benchmarked against standards used in quantitative research, such as never selecting cases on the dependent variable, always having more observations than variables, maximizing variation, and so on.

Mahoney, J., and D. Rueschemeyer, eds. 2003. Comparative historical analysis in the social sciences. Cambridge, UK: Cambridge Univ. Press.

This is a very impressive volume with chapters written by leading researchers in macro-sociological research and comparative politics. It shows the key strengths of comparative historical research for explaining key social phenomena such as revolutions, social provisions, and democracy. In addition, it masterfully combines substantive discussions with methodological implications and challenges, showing how case-based research contributes fundamentally to understanding social change.

Poteete, A., M. Janssen, and E. Ostrom. 2010. Working together: Collective action, the commons, and multiple methods in practice. Princeton, NJ: Princeton Univ. Press.

The study of Common Pool Resources (CPRs) has been one of the most theoretically advanced subjects in the social sciences. This excellent book introduces different research designs for analyzing questions related to the governance of CPRs and situates QCA nicely in the universe of research designs and strategies.

Ragin, C. C. 1997. Turning the tables: How case-oriented methods challenge variable-oriented methods. Comparative Social Research 16:27–42.

Engages directly with the work of King, et al. 1994 and fundamentally disagrees with its authors. Ragin argues that qualitative case-based research is based on different standards and that this type of research should be assessed on the basis of those standards.

Ragin, C. C., and H. Becker. 1992. What is a case? Exploring the foundations of social inquiry. Cambridge, UK: Cambridge Univ. Press.

Brings together leading researchers to discuss the deceptively simple question “what is a case?” and shows the many different approaches toward case-study research. One common thread running through the contributions is the emphasis on thinking hard, in theoretical terms, about the question “what is my case a case of?”




Qualitative Comparative Analysis: A Hybrid Method for Identifying Factors Associated with Program Effectiveness

Deborah Cragun,1 Susan T. Vadaparampil, Julie Baldwin,2 Heather Hampel,3 and Rita D. DeBate

1 Moffitt Cancer Center, Tampa, FL, USA
2 University of South Florida, Tampa, FL, USA
3 Ohio State University, Columbus, OH, USA

Qualitative comparative analysis (QCA) was developed over 25 years ago to bridge the qualitative and quantitative research gap. Upon searching PubMed and the Journal of Mixed Methods Research , this review identified 30 original research studies that utilized QCA. Perceptions that QCA is complex and provides few relative advantages over other methods may be limiting QCA adoption. Thus, to overcome these perceptions, this article demonstrates how to perform QCA using data from fifteen institutions that implemented universal tumor screening (UTS) programs to identify patients at high risk for hereditary colorectal cancer. In this example, QCA revealed a combination of conditions unique to effective UTS programs. Results informed additional research and provided a model for improving patient follow-through after a positive screen.

Use of what is still sometimes dichotomized into qualitative and quantitative research methods in complementary or comparative ways has become widely accepted in several social science disciplines (Bazeley, 2009). In contrast, the extent to which various disciplines accept and utilize approaches that fuse or blend qualitative and quantitative methods is less clear. Qualitative comparative analysis (QCA) is a hybrid method designed to bridge the qualitative (case-oriented) and quantitative (variable-oriented) research gap and to serve as a practical approach for understanding complex, real-world situations (Ragin, 1987; Rihoux & Marx, 2013). QCA was initially developed by Dr. Charles Ragin for use in small- or medium-N case study research (Ragin, 1987). QCA combines Boolean algebra and minimization algorithms to systematically compare cases and derive solutions consisting of one or more patterns of conditions that, when present or absent, are uniquely associated with the presence or absence of an outcome (Ragin, 1987). QCA therefore takes a set-theoretic approach originating from the idea that attributes of cases are often best evaluated in a holistic fashion using set relations (Ragin, 1987; Rihoux & Marx, 2013). In QCA, set membership is assigned based on whether, or to what degree, a case satisfies criteria for each outcome or condition. When QCA was originally developed, conditions and outcomes were dichotomized as either present or absent, and cases were classified according to whether they belong in each set. This original technique is now typically referred to as crisp-set QCA (csQCA) to distinguish it from related techniques that were developed later (Rihoux & Marx, 2013). Other QCA techniques include multi-value QCA (mvQCA), which allows conditions to take more than two values, and fuzzy-set QCA (fsQCA), which allows for wide variation in the extent to which cases satisfy set membership criteria for each outcome and condition (Rihoux & Marx, 2013). Software programs are available to assist in performing QCA; one of these was developed by Charles Ragin and is freely available for download at http://www.fsqca.com along with a user manual (Ragin et al., 2006).
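The crisp-set machinery described above can be made concrete with a short sketch. The Python below builds a truth table from a handful of invented, dichotomized cases (the condition names A, B, C and the data are hypothetical, and this toy is not a substitute for the dedicated QCA software mentioned above): each row of the table is one configuration of conditions, together with how many cases share it and what fraction of them shows the outcome.

```python
from collections import defaultdict

# Hypothetical, dichotomized cases: 1 = condition/outcome present, 0 = absent.
cases = [
    {"A": 1, "B": 1, "C": 0, "OUT": 1},
    {"A": 1, "B": 1, "C": 0, "OUT": 1},
    {"A": 1, "B": 0, "C": 1, "OUT": 0},
    {"A": 0, "B": 1, "C": 1, "OUT": 1},
    {"A": 0, "B": 0, "C": 0, "OUT": 0},
]

def truth_table(cases, conditions, outcome="OUT"):
    """Group cases by their configuration of conditions; for each
    configuration, report how many cases share it ("n") and what
    fraction of them shows the outcome ("consistency")."""
    rows = defaultdict(lambda: [0, 0])  # config -> [n cases, n with outcome]
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows[config][0] += 1
        rows[config][1] += case[outcome]
    return {config: {"n": n, "consistency": present / n}
            for config, (n, present) in rows.items()}

table = truth_table(cases, ["A", "B", "C"])
for config, stats in sorted(table.items(), reverse=True):
    print(config, stats)  # e.g. (1, 1, 0) covers two cases, both with the outcome
```

Boolean minimization would then reduce the perfectly consistent rows to a simpler solution; in practice that step is delegated to dedicated software such as the fsQCA program above or the R `QCA` package.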

Some criticisms of QCA are based on its perceived complexity or lack of identified advantage over other methods (Hawley, 2007). Admittedly, more traditional qualitative approaches to performing multiple cross-case comparisons exist (Miles & Huberman, 1994). However, as the number of cases increases, systematic comparisons may not be logistically feasible without QCA software. Another advantage of QCA stems from its mathematical approach to identifying solutions and assessing their overall merit, a quality valued by journals that publish primarily “quantitative” research.

The versatility of QCA is evidenced through its use in conjunction with various types of research designs ( Kahwati et al., 2011 ; Shanahan, Vaisey, Erickson, & Smolen, 2008 ; Weiner, Jacobs, Minasian, & Good, 2012 ). QCA can be used to analyze individual-level, institution-level, or country-level data from studies with small, medium, and large sample sizes. Furthermore, both unstructured data (e.g., interview transcripts) and structured data (e.g., responses to closed-ended survey questions) can be used to perform QCA.

The ability of QCA to identify combinations of conditions that are likely to be “necessary” and/or “sufficient” for a particular outcome of interest is useful for developing or testing theories and models. For example, knowledge about a positive health behavior may be necessary, but it is rarely sufficient to ensure that individuals will perform the behavior. According to the Health Belief Model (Janz & Becker, 1984), individuals often require a combination of the following conditions in order to perform a positive health behavior: (1) knowledge about the behavior; (2) a high level of perceived threat to their health if they fail to perform the behavior; (3) a high level of perceived benefits of performing the behavior; and (4) a low level of perceived barriers to performing the behavior. The ability to identify this type of “causal complexity” is one reason why QCA can be useful when generating or testing theoretical models (Ragin, 1987).
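In crisp-set terms, the necessity and sufficiency checks described above reduce to simple set arithmetic. The following sketch uses invented toy data loosely echoing the Health Belief Model conditions (not data from any actual study) to test whether a combination of conditions is sufficient for an outcome and whether a single condition is necessary:

```python
def sufficiency_consistency(cases, in_set, outcome="OUT"):
    """Among cases inside the condition set, the share showing the outcome.
    A value of 1.0 means the condition (combination) is perfectly sufficient."""
    members = [c for c in cases if in_set(c)]
    return sum(c[outcome] for c in members) / len(members) if members else None

def necessity_consistency(cases, in_set, outcome="OUT"):
    """Among cases showing the outcome, the share inside the condition set.
    A value of 1.0 means the condition is perfectly necessary."""
    with_outcome = [c for c in cases if c[outcome]]
    return (sum(1 for c in with_outcome if in_set(c)) / len(with_outcome)
            if with_outcome else None)

# Toy cases: knowledge (KNOW), perceived threat (THREAT), perceived benefits (BEN).
cases = [
    {"KNOW": 1, "THREAT": 1, "BEN": 1, "OUT": 1},
    {"KNOW": 1, "THREAT": 0, "BEN": 1, "OUT": 0},
    {"KNOW": 1, "THREAT": 1, "BEN": 0, "OUT": 0},
    {"KNOW": 0, "THREAT": 1, "BEN": 1, "OUT": 0},
    {"KNOW": 1, "THREAT": 1, "BEN": 1, "OUT": 1},
]

combo = lambda c: c["KNOW"] and c["THREAT"] and c["BEN"]
print(sufficiency_consistency(cases, combo))                # 1.0: combination sufficient
print(necessity_consistency(cases, lambda c: c["KNOW"]))    # 1.0: knowledge necessary
print(sufficiency_consistency(cases, lambda c: c["KNOW"]))  # 0.5: but not sufficient
```

In these toy data, knowledge appears in every case that shows the outcome (necessary) yet does not by itself guarantee the outcome (not sufficient), mirroring the Health Belief Model point in the paragraph above.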

Structural equation modeling (SEM) is a more commonly used analytic technique that also allows researchers to incorporate multiple variables and test theoretical models. Although SEM may arguably be easier to use than QCA ( Hawley, 2007 ), SEM requires large samples and the results are interpreted in a reductionist manner by considering the influence that one variable has on the outcome while holding all other variables in the model constant. Furthermore, unlike QCA, SEM and other inferential statistical techniques typically fail to consider the possibility of equifinality, whereby different combinations of conditions can lead to the same outcome ( Ragin, 1987 ; Rihoux & Ragin, 2009 ). For example, the combination of knowledge about how to perform a behavior along with a high level of perceived benefits may be sufficient to elicit a positive health behavior among a subset of women who do not face a particular barrier; however, additional or different conditions may be needed to elicit the behavior among other individuals. If a key factor is relevant to the outcome for only a subset of individuals, the correlation between the factor and outcome is weakened, potentially causing what may be a key factor to be deemed insignificant if inferential statistics are used. Additionally, inferential statistics assume that the influence of variables is symmetrical even though conditions that lead to the consistent performance of a health behavior may be different from conditions that cause poor adherence to the behavior.
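The equifinality and weakened-correlation points can be seen in a tiny invented example (the condition names and data are hypothetical): two different condition combinations each lead to the outcome, while one of the component conditions, taken alone, shows no bivariate association with it at all.

```python
# Hypothetical cases: two distinct paths to the outcome, A*B or A*C (equifinality).
cases = [
    {"A": 1, "B": 1, "C": 0, "OUT": 1},  # covered by path A*B
    {"A": 1, "B": 0, "C": 1, "OUT": 1},  # covered by path A*C
    {"A": 1, "B": 0, "C": 0, "OUT": 0},
    {"A": 0, "B": 1, "C": 1, "OUT": 0},
]

def covered(case, path):
    """True if the case matches every condition value the path specifies."""
    return all(case[cond] == val for cond, val in path.items())

paths = [{"A": 1, "B": 1}, {"A": 1, "C": 1}]

# Solution coverage: share of outcome cases explained by at least one path.
outcome_cases = [c for c in cases if c["OUT"]]
coverage = sum(any(covered(c, p) for p in paths)
               for c in outcome_cases) / len(outcome_cases)
print(coverage)  # 1.0 -- the equifinal solution explains every outcome case

# Yet condition B alone is uncorrelated with the outcome in these data:
p_out_given_b = sum(c["OUT"] for c in cases if c["B"]) / sum(c["B"] for c in cases)
p_out_given_not_b = (sum(c["OUT"] for c in cases if not c["B"])
                     / sum(1 for c in cases if not c["B"]))
print(p_out_given_b, p_out_given_not_b)  # 0.5 0.5 -- association washed out
```

A symmetric, correlational analysis of these data would deem B irrelevant, whereas the set-theoretic solution retains B as part of the sufficient path A*B, which is exactly the contrast the paragraph above draws.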

Despite several relative advantages of QCA, the extent to which this hybrid analytic approach has diffused and been adopted across academic disciplines remains unclear. Thus, the first objective of this article is to explore the diffusion and adoption of QCA through health research channels and mixed methods researchers. To achieve this objective, results are presented from a literature search of articles indexed by PubMed and articles published in the Journal of Mixed Methods Research . The second objective is to discuss several potential reasons for the diffusion and adoption rates of QCA. Subsequently, to promote the broader goal of active QCA dissemination, the final objective is to increase knowledge of QCA and decrease its perceived complexity. To achieve the final objective, data obtained as part of a multiple-case study are used to demonstrate how to perform csQCA and to illustrate benefits and limitations of this technique.

Diffusion and Adoption of QCA

In April of 2014, the index term “qualitative comparative analysis” was used for online searches of articles indexed by PubMed or published in the Journal of Mixed Methods Research ( JMMR ). Abstracts of all articles retrieved using the designated search term and published in or after 1987 (when QCA was developed) were reviewed. Articles were initially counted if the authors used any of the three QCA types mentioned previously (i.e., crisp-set, fuzzy-set, or multi-value) in an original research study or with hypothetical data. Given the paucity of articles, criteria were extended to include any articles where the authors described or mentioned QCA in order to evaluate contexts in which this method has been discussed.

Only 30 articles meeting the initial inclusion criteria had been indexed by PubMed as of April 2014, with 29 reporting data from an original research study and one using hypothetical data. After expanding the criteria, two additional PubMed articles were identified. Of these, one mentioned QCA, along with a few other “new techniques,” as a potential way to help advance research on stress, coping, and social support (Thoits, 1995); the other described QCA and several other methods used in synthesizing qualitative and quantitative evidence (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005).

Only one article published in JMMR as of April 2014 met the initial inclusion criteria, but eight met the expanded criteria. The single JMMR article meeting the initial criteria reported how a large-N survival analysis and a small-N QCA yielded new insights about the reasons for project delay in various organizations (Krohwinkel, 2014). One of the articles meeting the expanded criteria was a book review by Hawley (2007). An additional seven articles mentioned QCA in discussions of various topics, including integration, synthesis, and triangulation in mixed methods research (Bazeley, 2009; Bazeley & Kemp, 2012; Sandelowski, Voils, Leeman, & Crandell, 2012; Wolf, 2010); qualitative data analysis tools (Onwuegbuzie, Bustamante, & Nelson, 2010); data analysis as a process of interpretation (Van Ness, Fried, & Gill, 2011); and the lack of experimentation with innovative methods such as QCA (Boeije, Slagt, & van Wesel, 2013).

Although this literature search was limited in scope, the articles reveal diverse contexts in which QCA has been utilized, either as the sole analytic technique or, less commonly, to complement other analytic techniques. Articles posited contrasting views as to where on the qualitative/quantitative spectrum QCA should be classified. Finally, this review substantiates the assertion that QCA has been slow to diffuse into health research, but it also suggests that the rate at which QCA is being adopted in health research may be increasing over time: half of the QCA articles identified in PubMed were published after 2011.

Potential Reasons for the Slow Diffusion of QCA

Diffusion of Innovations Theory (Rogers, 2003) provides several possible explanations for these findings. First, an innovation takes time to diffuse within and across social groups, and the rate of diffusion depends on communication channels. QCA was developed in the late 1980s by Charles Ragin, a sociologist who studies politics (Ragin, 1987); it therefore had to spread across the political science and sociology disciplines through a limited number of communication channels before reaching other fields. Second, QCA may be viewed by some researchers as incompatible with the methodological paradigm to which they subscribe (Barbour, 1998). “Qualitative” researchers might view QCA as incompatible because it is based on Boolean algebra, and a computer program is typically used to aid the researcher in identifying solutions, which are then evaluated using quantitative measures called solution consistency and coverage. “Quantitative” researchers, in turn, may view QCA as incompatible because it entails an iterative process of evaluating data, often uses a non-random sample, and requires researchers to use their substantive knowledge of the cases to make several subjective or interpretive decisions at multiple points during the analysis (Rihoux & Ragin, 2009). Third, knowledge about how QCA works may be limited, as relatively few researchers have been trained to conduct QCA. Fourth, performing QCA was complex until computer software became widely available and automated much of the process. Nevertheless, Hawley (2007) has pointed out that the unique terminology used in QCA also makes learning the technique inherently difficult, and additional complexities have arisen as researchers have developed several different types of QCA (Rihoux & Ragin, 2009).

Given that mixed methods researchers generally take a pragmatic approach that transcends the positivist/constructivist or quantitative/qualitative “paradigm wars” (Morgan, 2007), findings from the JMMR review, which suggested that few mixed methods researchers have adopted QCA, were unexpected. Hawley’s (2007) description of QCA in the book review published in JMMR suggests that high perceived complexity and lack of relative advantage over other techniques may explain the slow diffusion and low adoption rates. Therefore, to reduce complexity, the following section provides a stepwise account of how QCA was highly instrumental as an initial step in a multiple-case study designed to evaluate the implementation and effectiveness of universal colorectal tumor screening programs to identify Lynch syndrome.

QCA Example

Background on Universal Tumor Screening (UTS) for Lynch Syndrome

Lynch syndrome, the most common cause of hereditary colorectal cancer (CRC), confers a 50–70% lifetime risk of CRC (Barrow et al., 2008; Hampel, Stephens, et al., 2005; Stoffel et al., 2009) as well as increased risks for other cancers (Barrow et al., 2009; Hampel, et al., 2005; Stoffel et al., 2009; Watson et al., 2008). Universal tumor screening (UTS) is the process whereby tumors from all newly diagnosed CRC patients are screened to identify those patients who may have Lynch syndrome (Bellcross et al., 2012). Over 35 cancer centers and hospitals across the U.S. have implemented UTS, but substantial variability in protocols and procedures exists across institutions (Beamer et al., 2012; Cohen, 2013). Outcomes also vary across institutions, as noted by large differences in the percentage of patients with a positive screen who follow through with genetic counseling and germline genetic testing (Beamer et al., 2012; Lynch, 2011; South et al., 2009; Cragun et al., 2014). Considering the critical importance of patient follow-through to the successful identification of family members with Lynch syndrome and the subsequent prevention or early detection of cancers (Bellcross et al., 2012), a multiple-case study was initiated to identify institution-level conditions that might contribute to the wide variability in patient follow-through.

Study Design

Upon approval from the Institutional Review Board at the University of South Florida, a multiple-case study was initiated during the fall of 2012. The rationale for employing a multiple-case study design was based on the following (adapted from Yin, 2008): (a) the key objective was to provide a detailed understanding of a complex phenomenon (i.e., UTS program implementation and patient follow-through) for which data are limited; (b) the purpose was to answer how and why questions; (c) the behavior of those involved could not be manipulated; and (d) it was hypothesized that contextual conditions would be relevant to variations in patient follow-through. The current article uses data from an online survey of institutional representatives. However, additional data were collected through a six-month follow-up survey and interviews with institutional representatives and other personnel involved in UTS implementation at participating institutions. Details regarding the overall study design have been published elsewhere (Cragun et al., 2014).

Conceptual Frameworks

The RE-AIM evaluation framework and the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al., 2009; Glasgow, Vogt, & Boles, 1999) served as the conceptual frameworks for the multiple-case study. RE-AIM comprises five evaluation dimensions (Reach, Effectiveness, Adoption, Implementation, and Maintenance) that assist with identifying conditions for multi-level comprehensive evaluations (Glasgow, Klesges, Dzewaltowski, Estabrooks, & Vogt, 2006). In the current study the RE-AIM evaluation dimensions were defined as follows:

  • Reach : the absolute number, proportion, and representativeness of CRC patients screened.
  • Effectiveness : the impact of UTS on outcomes (including patient follow-through with genetic counseling and germline genetic testing after a positive screen and potential negative effects).
  • Adoption : the absolute number, proportion, and representativeness of institutions and staff who implement UTS.
  • Implementation : the consistency of delivery, time and cost of the UTS program and what adaptations are made in various settings.
  • Maintenance : the effects of UTS over time with regard to both the institution and patients.

RE-AIM was selected based on the expectation that it would increase the quality, speed, and impact of stakeholder efforts to translate UTS into practice more effectively. The CFIR provided a framework for exploring the Implementation dimension of RE-AIM and identifying conditions that might influence Effectiveness . Table 1 lists the five CFIR domains and several conditions within each (Damschroder et al., 2009).

Five Domains of the Consolidated Framework for Implementation Research (CFIR)

Data analysis and interpretation were influenced by the RE-AIM and CFIR frameworks as well as by the following assumptions: 1) UTS implementation experiences will differ across institutions; 2) despite implementation heterogeneity, QCA can identify patterns of conditions that are consistently associated with high patient follow-through (i.e., program effectiveness); 3) results of QCA, in conjunction with detailed knowledge of each institution’s unique experiences, can be used to propose a “causal model” explaining differences in patient follow-through across institutions; 4) this model can inform implementation recommendations that are expected to improve program effectiveness; and 5) research is an iterative process, and alternative ways to achieve high patient follow-through may later be identified, thereby necessitating changes to the model. Based on these assumptions, the study methodology is pragmatic rather than rooted in any of the competing epistemological or methodological paradigms (i.e., positivist vs. constructivist or quantitative vs. qualitative) (Morgan, 2007).

Study Participants

Representatives for the Lynch Syndrome Screening Network (LSSN) who worked at various institutions that perform UTS were recruited using methods detailed in a previous manuscript (Cragun et al., 2014). After reviewing a consent form, participants completed an initial survey. Fifteen participants met the following a priori inclusion criteria: 1) represented institutions that had been performing Lynch syndrome screening on tumors from all newly diagnosed colorectal cancer (CRC) patients for at least six months; and 2) had access to institutional data on patient follow-through with genetic counseling and germline genetic testing. Patient follow-through data were collected by each institutional representative and provided in aggregate form to maintain strict patient anonymity. Names of institutions and institutional representatives were de-identified to maintain confidentiality.

The key study variables were derived from institution-level data that were collected through surveying institutional representatives. The initial online survey was developed, with input from several experts in cancer genetics and behavioral science, using the RE-AIM and CFIR frameworks as well as the researchers’ knowledge of institutional variations in UTS protocols. Information collected included: a) length of time UTS had been performed at the institution; b) details on the implementation process, protocol, and procedures; c) percentage of patients who undergo genetic counseling and percentage who undergo germline genetic testing; and d) additional conditions within CFIR domains that may have helped facilitate or impede patient follow-through after a positive tumor screen. Details pertaining to survey content, validation, and piloting are reported elsewhere (Cragun et al., 2014).

Crisp-Set Qualitative Comparative Analysis (csQCA)

In the current study, QCA was used to identify facilitators, barriers, or other conditions unique to institutions that reported relatively high patient follow-through; thus, the unit of analysis was the institution. Crisp-set QCA (csQCA) was chosen for two main reasons: 1) the conditions assessed in the survey were dichotomous; and 2) csQCA is simpler to perform and interpret than other QCA techniques (Rihoux & De Meur, 2009). Steps used to perform csQCA in the current example are summarized in Table 2 and detailed in the next sections. In practice these steps are somewhat fluid, as QCA is an iterative (rather than linear) process that allows modifications to be made as researchers gain additional information and insights into the cases (Ragin, 1987; Rihoux & De Meur, 2009; Rihoux & Ragin, 2009). Briefly, steps 1–3 prepare the data for use in QCA. Step 4 involves deciding which types of analyses to perform. Steps 5–9 describe how to determine which conditions are sufficient for the outcome. Finally, solutions are interpreted to propose “causal models” in step 10. Screenshots illustrating the steps for using fsQCA 2.0 software are available online in supplementary figures.

Summary of Steps Used to Perform Crisp-set Qualitative Comparative Analysis (csQCA)

Step 1: Outcome operationalization and set membership scoring

The outcome (i.e., patient follow-through) was operationalized using two questions assessing the percentage of patients who follow through with genetic counseling and the percentage who follow through with germline genetic testing after a positive tumor screening result. Response options were the same for both questions: 1 = <10%; 2 = 11–25%; 3 = 26–40%; 4 = 41–55%; 5 = 56–70%; 6 = 71–85%; and 7 = >85%. The ordered categorical response options for the two questions were averaged to create a patient follow-through (PF) score ranging from 1–7. After arranging cases in descending order by PF score, two natural breaks were identified (Table 3, column 1). The first 5 cases were grouped into the High-PF set, the next 5 cases into the Medium-PF set, and the last 5 into the Low-PF set. Natural breaks were chosen to ensure that cases with very similar values were grouped together, as recommended by Rihoux and De Meur (2009).

Data Matrix of Conditions Considered for Inclusion in QCA

One key limitation of csQCA is that all variables, including the outcome, need to be dichotomized so that the case either belongs to the set (coded as 1) or does not belong to the set (coded as 0). In the current study the threshold for inclusion in the High-PF set was a PF score ≥5. All other cases did not belong in the High-PF set. Cases not in a set are referred to by placing a tilde before the abbreviation (e.g., ~High-PF).
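To make the crisp-set coding concrete, the scoring and dichotomization described above can be sketched in Python. The institution labels and survey responses below are hypothetical (the study's actual scores appear in Table 3); only the averaging rule and the ≥5 threshold follow the text:

```python
def pf_score(counseling_response, testing_response):
    """Average the two ordered-categorical survey responses (each 1-7)."""
    return (counseling_response + testing_response) / 2

def high_pf_membership(score, threshold=5):
    """Crisp-set coding: 1 = belongs to the High-PF set, 0 = ~High-PF."""
    return 1 if score >= threshold else 0

# Hypothetical cases: (institution id, counseling response, testing response)
cases = [("A", 6, 7), ("B", 5, 5), ("C", 3, 4), ("D", 2, 1)]
memberships = {inst: high_pf_membership(pf_score(c, t)) for inst, c, t in cases}
# memberships -> {"A": 1, "B": 1, "C": 0, "D": 0}
```

A borderline score such as 4.9 falls just outside the set under this threshold, which illustrates the misclassification risk discussed later in the limitations.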

Step 2: Case selection

Although QCA has been used to analyze data from random samples, it was developed to compare cases that are carefully chosen using one of a number of different selection procedures (Gerring, 2007). In the current study, institutions representing High-PF and Low-PF sets were needed to determine conditions contributing to wide variability in patient follow-through across institutions. To maximize both sample size and diversity in conditions, all cases that met minimal inclusion criteria were dichotomized according to membership in the High-PF set and used in the analysis.

Step 3: Selection of key conditions

Although many CFIR constructs were measured to assist in gaining an in-depth understanding of each case, only a relatively small number of conditions could be used in QCA for two main reasons. First, the number of possible configurations increases exponentially with the number of conditions (2^k configurations for k dichotomous conditions), which increases the likelihood that some configurations will have no cases (i.e., remainders). Second, when the ratio of conditions to cases is high, there is a greater probability of obtaining a solution that appears sound by chance even when the model is misspecified (Marx & Dusa, 2011). Guidelines from a simulation study by Marx and Dusa (2011) were therefore followed by limiting analyses to no more than four conditions, so that misspecification of the model would most likely produce contradictory cases (i.e., cases with the same configuration of conditions but different outcomes).

In the current study, processes related to disclosure of screening results and discussion of genetic testing, as well as the individuals involved in these processes, were hypothesized to have the most direct influence on patient follow-through. As a first step in narrowing the number of conditions to consider for QCA, the researchers created a data spreadsheet of responses from each institutional representative, with cases ordered from highest to lowest PF. Frequencies of responses were then generated for each PF category (i.e., High-PF, Medium-PF, Low-PF). Each condition was evaluated in terms of how it might relate to patient follow-through, independently or in combination with other conditions. During the selection process the researchers created a data matrix (Table 3) of set membership scores for the conditions considered for inclusion in QCA. The data matrix was then reviewed to narrow the list of conditions through a series of decisions, described in more detail below, whereby similar pairs of conditions were combined into composite conditions and several conditions were later deleted.

General differences between PF groups were found with regard to who discloses positive screening results to patients. Representatives of all five High-PF institutions reported that a genetics professional discloses abnormal screening results to patients, as did representatives of two Medium-PF institutions. This condition was included in QCA and is referred to as (gen_prof_disclose_screen). How positive results were first disclosed (i.e., by phone or at a follow-up visit) was mixed across the PF groups and was therefore deleted from the data matrix. Nearly all institutions have genetics professionals provide pretest counseling prior to germline genetic testing; consequently, this condition was deleted from the data matrix due to lack of variability (Rihoux & De Meur, 2009).

Several conditions that could act as barriers to patient follow-through with genetic counseling and germline genetic testing were also considered. Most ~High-PF institutions reported that obtaining a referral from a healthcare provider was the primary mechanism for patients to receive genetic testing; this was coded as (referral_barrier) for use in QCA. Other barriers demonstrated similar response patterns; therefore, analogous pairs of barriers were combined using the Boolean operator “OR”, which indicates Boolean addition. For example, the new composite condition (difficulty_contact_pt) was considered “present” and coded “1” if the institutional representative indicated that difficulty contacting patients was a barrier to setting up either (a) genetic counseling OR (b) germline genetic testing. If neither barrier was reported, the composite condition was considered “absent” and coded “0”.
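The Boolean addition used to build composite conditions can be sketched as follows. The condition names mirror those in the text, but the per-institution values are hypothetical:

```python
def boolean_or(*conditions):
    """Boolean addition: the composite is present (1) if any input is present."""
    return 1 if any(conditions) else 0

# Hypothetical barrier codes for one institution
difficulty_contact_gc = 1  # difficulty contacting patients for genetic counseling
difficulty_contact_gt = 0  # difficulty contacting patients for germline testing

difficulty_contact_pt = boolean_or(difficulty_contact_gc, difficulty_contact_gt)
# difficulty_contact_pt == 1: the composite barrier is coded as present
```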

The revised data matrix contained three conditions selected for inclusion in QCA (gen_prof_disclose_screen, referral_barrier, and gen_directly_contacts_pt) as well as several additional barriers to consider. Once complete, the principal investigator saved the data matrix (an Excel spreadsheet) as a .csv file, because this file type can be opened and read by fsQCA 2.0 software using the point-and-click FILE menu (Ragin et al., 2006).

Step 4: Decide which analyses to run

While the focus of QCA is often on identifying conditions that are sufficient for the presence of an outcome, researchers have suggested that sufficiency analysis be preceded by identifying potential necessary conditions (Schneider & Wagemann, 2010). A necessary condition is one that occurs in all cases that demonstrate the presence of the outcome. There are many instances where a theory or previous empirical observations would lead researchers to hypothesize that certain conditions may be either 1) necessary and sufficient for an outcome or 2) necessary but insufficient for an outcome. However, in the current study, none of the conditions were originally hypothesized to be necessary for achieving high PF. Therefore, only analyses to determine sufficiency were performed.
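The necessity check described above amounts to verifying that a condition is present in every case where the outcome is present. A minimal sketch, with hypothetical cases and the condition names used in this study:

```python
def is_necessary(cases, condition, outcome="high_pf"):
    """True if `condition` is present in all cases where the outcome is present."""
    with_outcome = [c for c in cases if c[outcome] == 1]
    return all(c[condition] == 1 for c in with_outcome)

# Hypothetical case data (1 = present, 0 = absent)
cases = [
    {"high_pf": 1, "gen_prof_disclose_screen": 1, "referral_barrier": 0},
    {"high_pf": 1, "gen_prof_disclose_screen": 1, "referral_barrier": 1},
    {"high_pf": 0, "gen_prof_disclose_screen": 0, "referral_barrier": 1},
]
is_necessary(cases, "gen_prof_disclose_screen")  # True: present in every High-PF case
is_necessary(cases, "referral_barrier")          # False: absent in one High-PF case
```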

fsQCA 2.0 software, developed by Charles Ragin, was chosen to run the sufficiency analyses because it is freely available for download at http://www.fsqca.com along with a user manual (Ragin et al., 2006). To help decrease perceived complexity, the basic steps performed in the current study are described below; key QCA terms are defined and illustrated throughout the step-by-step description, but QCA jargon is used sparingly.

Step 5: Create a truth table

Using fsQCA software, “Truth Table Algorithm” was selected under the ANALYSE > Crisp sets menu. The outcome and conditions were chosen as prompted in the pop-up window before clicking the “run” button. The software then created a truth table similar to the replica in Table 4. Each row of the truth table shows a configuration of conditions and lists the number of cases that share that configuration. As is often the case, several configurations had no case examples (rows E–H); these are called remainders (Rihoux & Ragin, 2009).

Initial Truth Table of All Potential Conditional Configurations

Notes: This is a replica of the initial truth table generated using fsQCA 2.0 software. However, the first column was added to label configurations and several descriptors were added in parentheses.
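The truth-table logic of Step 5 can be sketched in Python: cases sharing a configuration of condition values collapse into one row, and each row's consistency is the proportion of its cases with the outcome present. The cases below are hypothetical and deliberately include one contradictory row:

```python
from collections import defaultdict

def truth_table(cases, conditions, outcome="high_pf"):
    """Group cases by configuration; report case count and row consistency."""
    rows = defaultdict(list)
    for case in cases:
        config = tuple(case[c] for c in conditions)  # one truth-table row key
        rows[config].append(case[outcome])
    return {
        config: {"n": len(outs), "consistency": sum(outs) / len(outs)}
        for config, outs in rows.items()
    }

conditions = ["gen_prof_disclose_screen", "referral_barrier"]
cases = [
    {"gen_prof_disclose_screen": 1, "referral_barrier": 0, "high_pf": 1},
    {"gen_prof_disclose_screen": 1, "referral_barrier": 0, "high_pf": 1},
    {"gen_prof_disclose_screen": 1, "referral_barrier": 0, "high_pf": 0},  # contradiction
    {"gen_prof_disclose_screen": 0, "referral_barrier": 1, "high_pf": 0},
]
table = truth_table(cases, conditions)
# Row (1, 0): n=3, consistency 2/3 -> contradictory; row (0, 1): n=1, consistency 0.
```

A row whose consistency is strictly between 0 and 1, like (1, 0) here, is exactly the kind of contradiction Step 6 resolves by adding a missing condition.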

Step 6: Examine the truth table and resolve contradictions

The objective when creating a truth table is to ensure that all cases that share a configuration also share the same outcome. The consistency score for each row indicates the proportion of cases in the respective configuration that belong to the High-PF set (i.e., the outcome is present). A consistency of 1 indicates that the configuration of conditions is always associated with the presence of the outcome. In the initial truth table (Table 4) generated for the current study, rows A and B have consistency scores of 0.8 and 0.5, respectively, indicating that these rows represent configurations with inconsistent outcomes. Specifically, row A represents a configuration shared by 4 High-PF cases and 1 Medium-PF case, and row B a configuration shared by 1 High-PF case and 1 Medium-PF case. The need to resolve such contradictions often arises in QCA (Marx & Dusa, 2011). Contradictions provide researchers an opportunity to gain additional understanding of the cases and serve as a mechanism for building models (Ragin, 2004). For example, contradictions could indicate that a key condition is missing from the model.

To resolve the contradictions, the research team went back to the reduced data matrix to examine the cases and then select another key barrier. Logic dictated that difficulty contacting patients after a positive screen (difficulty_contact_pt) would directly lower PF. Once this condition was added, the new truth table contained no contradictions (Table 5). The consistency scores for the first two configurations (rows A–B) were 1, and the consistency scores for the other configurations (rows C–F) were 0. Thus, the outcomes of the first two configurations (rows A–B) were coded 1 by the researchers, and the outcomes of all the other configurations for which there were cases (rows C–F) were coded 0. Table 5 does not show configurations for which there were no cases (i.e., remainders), as these configurations were deleted before running a standard analysis.

Revised Truth Table

Notes: The revised truth table was created using fsQCA 2.0 software by adding a fourth condition to the original truth table, assigning outcome scores for each configuration, and deleting configurations with no cases (remainders).

Step 7: Use software to generate solutions

Although the final truth table (Table 5) is quite revealing in terms of which contextual conditions are associated with High-PF, it can be helpful to have the computer software generate three solutions (complex, parsimonious, and intermediate), particularly when truth tables are large, multiple configurations are associated with the same outcome, or fsQCA (in which outcomes and/or conditions are not dichotomized) is used instead of csQCA. In the current study, the researchers ran a “Standard Analysis” by clicking this option in the menu at the bottom of the window. The software used the Quine-McCluskey algorithm (which is based on Boolean simplification) to make multiple comparisons of case configurations and logically simplify the data (Ragin et al., 2006). The idea behind this minimization procedure is that if two configurations differ in only one condition yet produce the same outcome, then the condition that distinguishes them can be considered irrelevant to the outcome and removed to create a simpler expression.
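A single step of this minimization procedure can be sketched as follows. The two example rows are hypothetical, ordered as (gen_prof_disclose_screen, referral_barrier, difficulty_contact_pt); fsQCA performs many such comparisons internally:

```python
def combine(config_a, config_b):
    """Merge two configurations with the same outcome that differ in exactly
    one condition; '-' marks the dropped (irrelevant) condition."""
    diffs = [i for i, (a, b) in enumerate(zip(config_a, config_b)) if a != b]
    if len(diffs) != 1:
        return None  # cannot be combined in a single minimization step
    merged = list(config_a)
    merged[diffs[0]] = "-"
    return tuple(merged)

# Both hypothetical rows produced High-PF; they differ only in referral_barrier:
row_1 = (1, 0, 0)
row_2 = (1, 1, 0)
combine(row_1, row_2)  # -> (1, "-", 0): referral_barrier drops out of the expression
```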

The fsQCA 2.0 software determines three solutions. The first is the complex solution, which is determined by minimizing only those configurations for which cases are available (i.e., remainders are not used to make simplifying assumptions). When there are multiple conditions or multiple configurations leading to the presence of the outcome, this solution may be too complex to be useful. This is why the software also generates a parsimonious and an intermediate solution with input from the researchers.

To determine the most parsimonious solution, the software makes assumptions about what the outcome might be for the configurations that have no cases (i.e., remainders) and uses these remainders to further simplify the expression (Ragin et al., 2006). During the minimization process in the current study, a “prime implicant chart” appeared on the screen. A prime implicant chart appears when there are multiple ways of simplifying a solution (Ragin et al., 2006); to obtain the most parsimonious solution, researchers must choose one prime implicant to cover each configuration in the chart. In the notation for prime implicants, a tilde (~) indicates that the condition is absent, and an asterisk (*) indicates Boolean “AND” (meaning that the conditions joined by * must both be present). The prime implicant chart in the current study showed that the configurations for the High-PF cases could be simplified in two different ways: (a) ~referral_barrier * ~difficulty_contact_pt; or (b) gen_prof_disclose_screen * ~difficulty_contact_pt. Although there was no compelling argument for choosing one prime implicant over the other, the researchers chose the first so that the software would continue the analysis. In some instances (as in the current study) the prime implicant chosen for the parsimonious solution does not influence the researchers’ final interpretation, because the parsimonious solution should be rejected if logic and knowledge of the topic cannot substantiate all of the simplifying assumptions on which it is based (Ragin, 2004).

Even when researchers cannot reasonably justify all of the assumptions underlying the parsimonious solution, certain assumptions may be easy to substantiate; these are referred to as “easy counterfactuals” and are used to create an intermediate solution (Ragin, 2004). As part of the analytic process, the software automatically opens another window so that researchers can decide which simplifying assumptions are reasonable. To generate the intermediate solution in the current study, the following logic-based assumptions were selected:

  • Absence of each barrier (i.e., ~difficulty_contact_pt and ~referral_barrier) will contribute to High-PF, but the presence of each barrier will not contribute to High-PF.
  • Involvement of a genetic professional in the disclosure of screening results (gen_prof_disclose_screen) and in directly contacting the patient to arrange genetic counseling and testing (gen_directly_contacts_pt) will contribute to High-PF, while lack of involvement by genetics professionals will not be associated with High-PF.

Step 8: Determine if the influence of conditions is symmetrical

The combinations of conditions associated with High-PF may differ from those associated with less successful outcomes. In the real world there are often more pathways leading to the failure of a health program than to its success. Because QCA is not based on correlations, it does not assume that conditions have a symmetrical influence. To illustrate this point, QCA steps 4–6 were repeated using the absence of High-PF (~High-PF) as the outcome. During this analytic process, the latter of the following two prime implicants was chosen to be consistent with the initial analysis: (a) ~gen_prof_disclose_screen or (b) referral_barrier. Assumptions made to generate the intermediate solution were the inverse of those chosen for the first analysis (i.e., the presence of barriers would contribute to ~High-PF, and the absence of involvement by genetics professionals would contribute to ~High-PF).

Step 9: Evaluate consistency and coverage scores for the solutions

Consistency and coverage are interpreted differently when determining whether conditions are necessary versus sufficient. When performing sufficiency analyses, as in the current example, solution consistency should be close to 1 in order for researchers to conclude that the combinations of conditions in the solution are almost always associated with the outcome of interest (Ragin, 2004). A solution coverage of 1 indicates that all cases with the outcome of interest are represented by at least one of the combinations of conditions in the solution. When a solution contains multiple combinations of conditions, raw and unique coverage can be used to assess the importance of each combination and the extent to which a case is covered by more than one combination.
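For crisp sets, solution consistency and coverage reduce to simple proportions, which can be sketched as follows. The condition names ("a", "b") and case data are hypothetical; fsQCA reports these scores automatically:

```python
def covered(case, term):
    """A term is a dict of required condition values; True if the case matches."""
    return all(case[c] == v for c, v in term.items())

def solution_metrics(cases, terms, outcome="high_pf"):
    """Consistency: share of covered cases that show the outcome.
    Coverage: share of outcome cases that are covered by some term."""
    in_solution = [c for c in cases if any(covered(c, t) for t in terms)]
    with_outcome = [c for c in cases if c[outcome] == 1]
    consistency = sum(c[outcome] for c in in_solution) / len(in_solution)
    coverage = sum(1 for c in with_outcome
                   if any(covered(c, t) for t in terms)) / len(with_outcome)
    return consistency, coverage

# Hypothetical cases and a one-term solution (condition "a" present)
cases = [
    {"a": 1, "b": 0, "high_pf": 1},
    {"a": 1, "b": 1, "high_pf": 1},
    {"a": 0, "b": 1, "high_pf": 0},
]
consistency, coverage = solution_metrics(cases, [{"a": 1}])
# consistency == 1.0: every covered case shows the outcome
# coverage == 1.0: every outcome case is covered by the solution
```

Raw coverage for a single term is obtained by passing only that term; unique coverage subtracts the share of outcome cases also covered by the solution's other terms.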

Step 10: Interpret the resulting solutions and create causal models

Even if conditions are consistently associated with an outcome, it does not mean they cause the outcome. However, researchers can use solutions in conjunction with theory, conceptual frameworks, and detailed knowledge about the cases to develop causal models that help unpack potential mechanisms leading to the outcome (Ragin, 2004). In the current study the researchers used their substantive knowledge of UTS and conceptual framework (i.e., CFIR) to interpret the solutions and piece together key conditions to create tentative models that were intended to be modified as additional details about the cases were obtained.

QCA Results

Table 6 lists the complex, parsimonious, and intermediate solutions from the first csQCA analysis, performed to determine conditions associated with High-PF. The parsimonious solution was rejected because not all of the simplifying assumptions could be substantiated. The model was based on the intermediate solution, which in this case happened to be the same as the complex solution. This intermediate solution is interpreted as meaning that the following three conditions are together sufficient for High-PF: 1) a genetics professional discloses positive tumor screening results to patients; AND 2) a referral from another health care provider is not the primary mechanism for the patient to receive testing; AND 3) difficulty contacting patients is not a barrier. This combination of three conditions is unique to the High-PF cases, which is why the consistency score is 1. The coverage score of 1 verifies that this combination of three conditions characterizes (covers) all 5 cases that belong to the High-PF set.

QCA Solutions, Consistency and Coverage

Notes: A tilde (~) indicates the absence of the outcome or condition.

The intermediate solutions are bolded because they were determined to be the most theoretically sound and not overly simple or complex.

The bottom of Table 6 presents all three solutions for the absence of the outcome (i.e., ~High-PF). The three solutions all differed; the causal model was based on the intermediate solution because it was not too simple but made more logical sense than the complex solution. The intermediate solution for the absence of High-PF revealed two distinct sets of conditions, each associated with the absence of the outcome (Table 6). The intermediate solution can be interpreted as meaning that difficulty contacting patients who screen positive is sufficient, but not necessary, to prevent High-PF. Alternatively, the following three conditions are together sufficient to prevent High-PF: genetics professionals do not disclose positive screening results, AND genetic counselors do not contact patients directly to arrange genetic counseling and testing, AND health care provider referral is the key mechanism for patients to receive genetic testing. The consistency of the intermediate solution was 1, indicating there were no contradictory cases. The coverage score of 1 indicates that all cases without high patient follow-through (~High-PF) fit one or both of the combinations in the solution. The raw coverage for the first configuration (i.e., difficulty contacting patients) was 0.3, indicating that the presence of this barrier distinguished 3 of the 10 ~High-PF cases from the High-PF cases. The unique coverage for this configuration was lower (0.2) because 1 of the 3 institutions with difficulty contacting patients also shared the second combination of conditions, which uniquely covered the other ~High-PF cases (Table 6).

QCA was used in the current multiple-case study to formulate tentative causal models explaining the high variability in patient follow-through across institutions that have implemented a universal tumor screening program to identify patients with Lynch syndrome. QCA solutions provided key insights into how program implementation may contribute to program effectiveness. In other words, QCA identified conditions associated with relatively high levels of patient follow-through with genetic counseling and germline testing after a positive tumor screen. QCA was also useful in identifying additional questions to explore as part of the ongoing multiple-case study. For example, why did representatives from the five High-PF institutions report no difficulty contacting patients? In addition, what may prevent stakeholders at Low-PF or Medium-PF institutions from: (a) altering UTS procedures so that genetics professionals contact patients to disclose positive screening results; and (b) eliminating the need for a referral? Insights gained from QCA therefore informed the creation of semi-structured interview guides and follow-up surveys. Subsequently, follow-up data have been used, in conjunction with QCA results, to develop a more complete mechanistic model of how implementation conditions are likely influencing patient follow-through. This model has since been published and used as evidence to support changes in UTS procedures (Cragun et al., 2014). Furthermore, these changes have already led to improvement in patient follow-through at one institution, according to personal communication with the institutional representative.

Despite their many uses, models created using QCA may be overly simplistic or incomplete. For example, findings from this case study do not preclude the possibility that other combinations of conditions could lead to high patient follow-through (High-PF) at institutions that were not studied. Indeed, one advantage of QCA is that it can identify multiple different “recipes” for success. As more information is obtained and additional institutions performing UTS are identified, it is likely that the model will be expanded and modified further.

Several other criticisms that researchers level at QCA originate from what Morgan (2007) referred to as the “paradigm wars”. For instance, researchers who view QCA using a “quantitative” lens might consider performing multiple analyses on the same data to be problematic. However, multiple analyses are consistent with the iterative nature of QCA. Furthermore, determining which conditions are associated with both the presence and absence of the outcome is considered good practice by QCA researchers (Schneider & Wagemann, 2010), as it can provide additional insights into the underlying mechanisms and can add to the credibility of the proposed models. Several other concerns that critics raise, such as the use of purposive sampling, are also unproductive from a pragmatic perspective. Nevertheless, several practical limitations are worth mentioning.

One limitation of QCA is the potential for measurement error and case misclassification. The current study was based on self-reported data from a single individual on behalf of their institution and may contain inaccuracies or bias. Furthermore, the use of natural breaks for set membership scoring may result in misclassification. For example, an open-ended survey response from the institutional representative of a Medium-PF institution revealed that this institution may instead belong in the High-PF set due to a unique difference in this institution’s protocol that may have led to an underestimation of patient follow-through. This institution had the highest patient follow-through among the Medium-PF set and was similar to High-PF institutions in several key ways. However, the representative reported difficulty contacting patients as a barrier. Given that difficulty contacting patients was sufficient to prevent High-PF under the current model, reclassification of this institution into the High-PF set would unveil a contradiction that would need to be resolved through modifications to the model based on additional information.

The measurement issue described above illustrates another limitation of csQCA, whereby conditions and outcomes must be dichotomized. In contrast, fsQCA overcomes this limitation by allowing the researcher to code the outcome and/or conditions on a calibrated scale from 0 to 1. This fuzzy score represents the extent to which a case falls within a set, rather than being fully in or fully out of it (Rihoux & Ragin, 2009). The resulting advantages of fsQCA over csQCA include the ability to maintain variation and to represent social reality more accurately when outcomes and/or conditions are not truly dichotomous. Although bias and measurement error may remain a concern, using fsQCA may lead the researcher to assign a set membership score that is off by only a small degree rather than misclassifying the case into the opposing set, and this is expected to have a smaller impact on the results. Unfortunately, the advantages of fsQCA also make it more complicated than csQCA.
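One common way to produce such calibrated scores is the “direct method” described by Ragin (2008), in which the researcher chooses three substantive anchors (full non-membership, the crossover point, and full membership) and raw values are mapped onto 0–1 memberships through a logistic transformation. The sketch below is a generic illustration with hypothetical anchors and function names, not the calibration used in this study:

```python
import math

def calibrate(value, full_non, crossover, full_in):
    """Direct-method calibration: map a raw score onto a 0-1 fuzzy
    membership score using three substantive anchors. A value at the
    crossover receives 0.5; the full-membership and full non-membership
    anchors correspond to log-odds of +3 and -3 respectively."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (crossover - value) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# E.g. calibrating a hypothetical 0-100 follow-through percentage with
# anchors at 20 (fully out), 50 (crossover), and 80 (fully in):
print(round(calibrate(65, 20, 50, 80), 2))  # -> 0.82
```

A case scoring 65 is thus mostly, but not fully, in the set, illustrating how fuzzy scoring preserves variation that dichotomization would discard.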

There are other limitations to this study that do not result directly from the use of QCA. First, the use of aggregated institution-level data, rather than raw patient-level data, did not allow us to assess associations between individual-level factors and patient follow-through. This is clearly a limitation of our model (Cragun et al., 2014). To tease out the relative influence of institution-level and individual-level factors on patient follow-through, it will be necessary to collect individual-level data in a systematic fashion from a larger number of institutions so that multi-level modeling can be employed. In fact, our anticipated sample size of fewer than 20 institutions would have provided insufficient statistical power for multi-level modeling even if individual-level data had been available; thus, QCA was our best option for this study. Power limitations also prevented us from performing other types of inferential statistics, including structural equation modeling.

Although rooted in a qualitative paradigm, QCA may appeal to researchers or journal editors who prefer “quantitative” methods because QCA: (a) takes a logical and mathematical approach; (b) can be used to analyze small, medium, and large data sets; (c) provides a tool for identifying causal complexity and equifinality; (d) allows the researcher to generate solutions (with the aid of a computer program); and (e) calculates measures to evaluate the merit of the solutions (i.e., solution consistency and coverage). Given that QCA confers several advantages over other techniques, one of the purposes of this article was to encourage its active diffusion across mixed methods research channels. This article has attempted to reduce the perceived complexity of QCA by illustrating how to perform the simplest type of QCA (i.e., csQCA). The example presented demonstrates how QCA aids in systematically identifying and simplifying key conditions that are uniquely associated with an outcome of interest. Although the use of cross-sectional data inhibits the ability to demonstrate causation, QCA provides solutions that researchers can use to propose logical mechanisms by which key conditions may act together to facilitate or impede outcomes. The iterative nature of QCA allows the researcher to gain an in-depth understanding of multiple cases and modify “causal” models as additional information is discovered.

For those researchers who are new to QCA and/or mixed methods research, we recommend they review a broad array of prior studies that have used various techniques, regardless of whether the topic areas align with their own research interests. It is our opinion that examples from other researchers are a great way to learn and apply new techniques that can advance research across disciplines and topical areas.

QCA and other techniques that fuse qualitative and quantitative methods (Bazeley, 1999) provide an opportunity to help bridge the gap that the “paradigm wars” have created. Ultimately, we believe researchers should first consider how resources or other conditions may limit the type of data they can feasibly obtain to answer their research questions, and then choose one or more of a wide variety of analytic tools based on how well suited the tools are to answering their specific research questions. To that end, QCA is another tool that mixed methods researchers may find useful.

Supplementary Material

Supplemental file with screenshots.

  • Barbour RS. Mixing qualitative methods: quality assurance or qualitative quagmire? Qualitative Health Research. 1998;8(3):352–361.
  • Bazeley P. Editorial: Integrating data analyses in mixed methods research. Journal of Mixed Methods Research. 2009;3(3):203–207.
  • Bazeley P, Kemp L. Mosaics, triangles, and DNA metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012;6(1):55–72.
  • Beamer LC, Grant ML, Espenschied CR, Blazer KR, Hampel HL, Weitzel JN, MacDonald DJ. Reflex immunohistochemistry and microsatellite instability testing of colorectal tumors for Lynch syndrome among US cancer programs and follow-up of abnormal results. Journal of Clinical Oncology. 2012;30(10):1058–1063.
  • Bellcross CA, Bedrosian SR, Daniels E, Duquette D, Hampel H, Jasperson K, Khoury MJ. Implementing screening for Lynch syndrome among patients with newly diagnosed colorectal cancer: summary of a public health/clinical collaborative meeting. Genetics in Medicine. 2012;14(1):152–162.
  • Boeije H, Slagt M, van Wesel F. The contribution of mixed methods research to the field of childhood trauma: A narrative review focused on data integration. Journal of Mixed Methods Research. 2013;7(4):347–369.
  • Cohen SA. Current Lynch syndrome tumor screening practices: A survey of genetic counselors. Journal of Genetic Counseling. 2014;23(1):38–47.
  • Cragun D, Debate RD, Vadaparampil ST, Baldwin J, Hampel H, Pal T. Comparing universal Lynch syndrome tumor-screening programs to evaluate associations between implementation strategies and patient follow-through. Genetics in Medicine. 2014.
  • Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science. 2009;4:50.
  • Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research & Policy. 2005;10(1):45–53.
  • Gerring J. Case Study Research: Principles and Practices. Cambridge, UK: Cambridge University Press; 2007.
  • Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health. 1999;89(9):1322–1327.
  • Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Education Research. 2006;21(5):688–694.
  • Hawley JD. [Review of the book Innovative comparative methods for policy analysis: Beyond the quantitative-qualitative divide, by B. Rihoux & H. Grimm]. Journal of Mixed Methods Research. 2007;1(4):390–392.
  • Janz NK, Becker MH. The Health Belief Model: a decade later. Health Education Quarterly. 1984;11(1):1–47.
  • Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, Kinsinger LS. Best practices in the Veterans Health Administration's MOVE! weight management program. American Journal of Preventive Medicine. 2011;41(5):457–464.
  • Krohwinkel A. A configurational approach to project delays: Evidence from a sequential mixed methods study. Journal of Mixed Methods Research. 2014.
  • Lynch PM. How helpful is age at colorectal cancer onset in finding HNPCC? Diseases of the Colon & Rectum. 2011;54(5):515–517.
  • Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA): Contradictions and consistency benchmarks for model specification. Methodological Innovations Online. 2011;6(2):103–148.
  • Miles MB, Huberman AM. Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: SAGE; 1994.
  • Onwuegbuzie AJ, Bustamante RM, Nelson JA. Mixed research as a tool for developing quantitative instruments. Journal of Mixed Methods Research. 2010;4(1):56–78.
  • Ragin CC. Between complexity and parsimony: Limited diversity, counterfactual cases, and comparative analysis. 2004. Retrieved from http://escholarship.org/uc/item/1z567tt .
  • Ragin CC. The comparative method: moving beyond qualitative and quantitative strategies. Berkeley, CA: University of California Press; 1987.
  • Rihoux B, De Meur G. Crisp-set qualitative comparative analysis (csQCA). In: Rihoux B, Ragin C, editors. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. Thousand Oaks, CA: SAGE; 2009. pp. 33–68.
  • Rihoux B, Marx A. QCA, 25 years after “the comparative method”: Mapping, challenges, and innovations—mini-symposium. Political Research Quarterly. 2013;66(1):167–235.
  • Rihoux B, Ragin CC. Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques. Thousand Oaks, CA: SAGE; 2009.
  • Rogers EM. Diffusion of Innovations. 5th ed. New York, NY: Free Press; 2003.
  • Sandelowski M, Voils CI, Leeman J, Crandell JL. Mapping the mixed methods–mixed research synthesis terrain. Journal of Mixed Methods Research. 2012;6(4):317–331.
  • Schneider CQ, Wagemann C. Standards of good practice in qualitative comparative analysis (QCA) and fuzzy-sets. Comparative Sociology. 2010;9(3):397–418.
  • Shanahan MJ, Vaisey S, Erickson LD, Smolen A. Environmental contingencies and genetic propensities: social capital, educational continuation, and dopamine receptor gene DRD2. American Journal of Sociology. 2008;114(Suppl):S260–S286.
  • South CD, Yearsley M, Martin E, Arnold M, Frankel W, Hampel H. Immunohistochemistry staining for the mismatch repair proteins in the clinical care of patients with colorectal cancer. Genetics in Medicine. 2009;11(11):812–817.
  • Thoits PA. Stress, coping, and social support processes: Where are we? What next? Journal of Health and Social Behavior. 1995;(Spec No):53–79.
  • Van Ness PH, Fried TR, Gill TM. Mixed methods for the interpretation of longitudinal gerontologic data: Insights from philosophical hermeneutics. Journal of Mixed Methods Research. 2011;5(4):293–308.
  • Weiner BJ, Jacobs SR, Minasian LM, Good MJ. Organizational designs for achieving high treatment trial enrollment: a fuzzy-set analysis of the community clinical oncology program. Journal of Oncology Practice. 2012;8(5):287–291.
  • Wolf F. Enlightened eclecticism or hazardous hotchpotch? Mixed methods and triangulation strategies in comparative public policy research. Journal of Mixed Methods Research. 2010;4(2):144–167.
  • Yin RK. Case study research: Design and methods. 4th ed. Thousand Oaks, CA: SAGE; 2008.


Systematic Mixed-Methods Research for Social Scientists pp 131–156 Cite as

Qualitative Comparative Analysis (QCA): A Classic Mixed Method Using Theory

  • Wendy Olsen
  • First Online: 29 July 2022


Qualitative comparative analysis (QCA) is an umbrella set of methods that use case-study evidence. This chapter begins by describing a case-study research example. The table-making stage is followed by a table-reduction stage. These two stages are analogous to standard survey-data methods, and similarities with regression are therefore noted.

This chapter then covers the original scaling, the fuzzy-set membership score, and a Z-score transformation for fuzzy sets. Crisp sets are simple binaries, such as zero/one, while fuzzy sets range from 0 to 1, but both measure the degree of set membership as a property of an entity. For groups of cases (i.e. of these entities), each permutation of variables and/or contextual conditions is distinct. Multiple similar cases are grouped together, and each such group is known as a configuration.

This chapter defines sufficient cause using a Boolean approach. A dependent variable can be found to occur if-and-only-if some conditions exist (sometimes written iff), but ‘if-and-only-if’ is not the same as sufficient cause. ‘If-and-only-if’ implies that the independent variables are both necessary and sufficient for the outcome. QCA teases these out as different relations (one is a subset and the other is a superset relation). By contrast, regression models require that causes be both necessary and sufficient for the dependent variable, and under that strong definition of causality, the modeller often resists commenting on causal mechanisms at all. In this chapter QCA is offered as a way to address confirmatory causal models directly using empirical evidence, both qualitative and quantitative. A complementary method is to offer an F test of each of the QCA results.
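The subset and superset relations mentioned above can be made concrete with a small crisp-set sketch. The data and function names below are hypothetical: a condition X is sufficient for an outcome Y when the cases exhibiting X form a subset of the cases exhibiting Y, and necessary when they form a superset.

```python
# Hypothetical crisp-set data: (condition X present?, outcome Y present?) per case.
cases = [(1, 1), (1, 1), (0, 1), (0, 0), (0, 0)]

def sufficient(cases):
    """X is sufficient for Y when cases with X are a subset of cases with Y."""
    return all(y == 1 for x, y in cases if x == 1)

def necessary(cases):
    """X is necessary for Y when cases with Y are a subset of cases with X."""
    return all(x == 1 for x, y in cases if y == 1)

print(sufficient(cases), necessary(cases))  # True False: X suffices but is not necessary
```

The third case (outcome present without the condition) is what breaks necessity while leaving sufficiency intact, which is exactly the asymmetry QCA is designed to detect.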



Author information: Wendy Olsen, Department of Social Statistics, University of Manchester, Manchester, UK.

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter: Olsen, W. (2022). Qualitative Comparative Analysis (QCA): A Classic Mixed Method Using Theory. In: Systematic Mixed-Methods Research for Social Scientists. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-93148-3_6. Print ISBN 978-3-030-93147-6; Online ISBN 978-3-030-93148-3.



Qualitative comparative analysis

Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest.

QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subjected to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence.

The results are typically expressed as statements in ordinary language or as Boolean algebra. For example:

  • A combination of condition A and condition B, or a combination of condition C and condition D, will lead to outcome E.
  • In Boolean notation this is expressed more succinctly as A*B + C*D → E.
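A minimal sketch of how such a Boolean solution is read for a single case (hypothetical 0/1 scores; `*` is logical AND and `+` is logical OR):

```python
def solution(case):
    """Evaluate the solution A*B + C*D for one case of 0/1 condition scores:
    '*' is logical AND and '+' is logical OR."""
    return bool((case["A"] and case["B"]) or (case["C"] and case["D"]))

print(solution({"A": 1, "B": 1, "C": 0, "D": 0}))  # True: the A*B path holds
print(solution({"A": 1, "B": 0, "C": 0, "D": 1}))  # False: neither path holds
```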

QCA results are able to distinguish various complex forms of causation, including:

  • Configurations of causal conditions, not just single causes. In the example above, there are two different causal configurations, each made up of two conditions.
  • Equifinality, where there is more than one way in which an outcome can happen. In the above example, each additional configuration represents a different causal pathway.
  • Causal conditions which are necessary, sufficient, both or neither, plus more complex combinations (known as INUS causes – insufficient but necessary parts of a configuration that is unnecessary but sufficient), which tend to be more common in everyday life. In the example above, no one condition was sufficient or necessary, but each condition is an INUS-type cause.
  • Asymmetric causes – where the causes of failure may not simply be the absence of the cause of success. In the example above, the configuration associated with the absence of E might have been one like this: A*B*X + C*D*X → e. Here X was a sufficient and necessary blocking condition.
  • The relative influence of different individual conditions and causal configurations in a set of cases being examined. In the example above, the first configuration may have been associated with 10 cases where the outcome was E, whereas the second might have been associated with only 5 cases. Configurations can be evaluated in terms of coverage (the percentage of cases they explain) and consistency (the extent to which a configuration is always associated with a given outcome).
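In crisp-set terms these two measures are simple proportions: consistency is the share of a configuration's cases that show the outcome, and coverage is the share of all outcome cases that the configuration accounts for. A sketch with hypothetical data (the function and condition names are illustrative, not from any QCA package):

```python
def consistency_and_coverage(cases, config, outcome="E"):
    """Crisp-set consistency and coverage for one configuration.
    cases: list of dicts of 0/1 scores; config: the condition values
    that define membership in the configuration."""
    in_config = [c for c in cases if all(c[k] == v for k, v in config.items())]
    outcome_cases = [c for c in cases if c[outcome] == 1]
    both = [c for c in in_config if c[outcome] == 1]
    consistency = len(both) / len(in_config)   # how reliably the config yields E
    coverage = len(both) / len(outcome_cases)  # how much of E the config explains
    return consistency, coverage

cases = [
    {"A": 1, "B": 1, "E": 1},
    {"A": 1, "B": 1, "E": 1},
    {"A": 1, "B": 1, "E": 0},  # one inconsistent case
    {"A": 0, "B": 1, "E": 1},  # an outcome case the configuration misses
]
print(consistency_and_coverage(cases, {"A": 1, "B": 1}))  # both roughly 0.67
```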

QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations. The latter depends on the number of conditions present. In a 2012 survey of QCA applications, the median number of cases was 22 and the median number of conditions was 6. For each case, the presence or absence of a condition is recorded using nominal data, i.e. a 1 or 0. More sophisticated forms of QCA allow the use of “fuzzy sets”, i.e. where a condition may be partly present or partly absent, represented by a value of 0.8 or 0.2, for example. Or there may be more than one kind of presence, represented by values of 0, 1, 2 or more. Data for a QCA analysis is collated in a simple matrix form, where rows = cases and columns = conditions, with the rightmost column listing the associated outcome for each case, also described in binary form.
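The matrix layout described above, and its collapse into truth-table configurations, can be sketched as follows (hypothetical cases; condition scores first, binary outcome in the rightmost position):

```python
from collections import defaultdict

# Hypothetical data matrix: rows are cases, columns are conditions A, B, C,
# with the binary outcome in the rightmost position.
matrix = [
    (1, 0, 1, 1),
    (1, 0, 1, 1),
    (0, 1, 1, 0),
    (1, 0, 1, 0),  # same configuration as the first two rows, different outcome
]

# Collapse identical condition rows into truth-table configurations,
# keeping the outcomes of all cases that share each configuration.
truth_table = defaultdict(list)
for *conditions, outcome in matrix:
    truth_table[tuple(conditions)].append(outcome)

for config, outcomes in sorted(truth_table.items()):
    status = "consistent" if len(set(outcomes)) == 1 else "contradictory"
    print(config, outcomes, status)
```

The contradictory row is the kind that, as described below, must be resolved by rejecting outlier cases or adding a further condition before minimisation.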

QCA is a theory-driven approach, in that the choice of conditions being examined needs to be driven by a prior theory about what matters. The list of conditions may also be revised in the light of the results of the QCA analysis if some configurations are still shown as being associated with a mixture of outcomes. The coding of the presence/absence of a condition also requires an explicit view of that condition and when and where it can be considered present. Dichotomisation of quantitative measures about the incidence of a condition also needs to be carried out with an explicit rationale, and not on an arbitrary basis.

Although QCA was originally developed by Charles Ragin some decades ago it is only in the last decade that its use has become more common amongst evaluators. Articles on its use have appeared in Evaluation and the American Journal of Evaluation.

For a worked example, see Charles Ragin’s What is Qualitative Comparative Analysis (QCA)?, slides 6 to 15 on The bare-bones basics of crisp-set QCA.

[A crude summary of the example is presented here]

In his presentation Ragin provides data on 65 countries and their reactions to austerity measures imposed by the IMF. This has been condensed into a Truth Table (shown below), which shows all possible configurations of four different conditions that were thought to affect countries’ responses: the presence or absence of severe austerity, prior mobilisation, corrupt government, rapid price rises. Next to each configuration is data on the outcome associated with that configuration – the numbers of countries experiencing mass protest or not. There are 16 configurations in all, one per row. The rightmost column describes the consistency of each configuration: whether all cases with that configuration have one type of outcome, or a mixed outcome (i.e. some protests and some no protests). Notice that there are also some configurations with no known cases.

[Truth table image not reproduced here]

Ragin’s next step is to resolve the configurations with mixed consistency. This is done either by rejecting cases within an inconsistent configuration because they are outliers (with exceptional circumstances unlikely to be repeated elsewhere) or by introducing an additional condition (column) that distinguishes between those configurations which did lead to protest and those which did not. In this example, a new condition was introduced that removed the inconsistency, which was described as “not having a repressive regime”.

The next step involves reducing the number of configurations needed to explain all the outcomes, known as minimisation. Because this is a time-consuming process, it is done by an automated algorithm (i.e. a computer program). This algorithm takes two configurations at a time and examines whether they have the same outcome. If so, and if the configurations differ in respect of only one condition, that condition is deemed not to be an important causal factor and the two configurations are collapsed into one. This process of comparisons is continued, looking at all configurations, including newly collapsed ones, until no further reductions are possible.
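The pairwise comparison the algorithm performs can be sketched as a simplified, Quine–McCluskey-style pass over crisp configurations that share the same outcome (this illustrates the reduction rule only, not the exact algorithm of any particular QCA package):

```python
def minimise(configs):
    """Repeatedly collapse pairs of configurations (tuples of 0/1 scores,
    all with the same outcome) that differ in exactly one condition; the
    differing condition is replaced by '-' (don't care)."""
    configs = set(configs)
    changed = True
    while changed:
        changed = False
        for a in list(configs):
            for b in list(configs):
                diff = [i for i in range(len(a)) if a[i] != b[i]]
                if len(diff) == 1:
                    merged = a[:diff[0]] + ("-",) + a[diff[0] + 1:]
                    configs -= {a, b}
                    configs.add(merged)
                    changed = True
                    break
            if changed:
                break
    return configs

# Two configurations with the same outcome, differing only in the third
# condition, collapse into one with that condition dropped:
print(minimise({(1, 1, 0), (1, 1, 1)}))  # {(1, 1, '-')}
```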

[Jumping a few more specific steps] The final result from the minimisation of the above truth table is this configuration:

SA*(PR + PM*GC*NR)

The expression indicates that IMF protest erupts when severe austerity (SA) is combined with either (1) rapid price increases (PR) or (2) the combination of prior mobilization (PM), government corruption (GC), and non-repressive regime (NR).
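The final expression can be applied mechanically to any coded case. A minimal sketch in Python (the function name is my own; the dictionary keys follow the abbreviations above):

```python
def predicts_protest(case):
    """Evaluate the minimised solution SA*(PR + PM*GC*NR) for one case,
    where each condition is coded 1 (present) or 0 (absent)."""
    return bool(case["SA"] and (case["PR"]
                or (case["PM"] and case["GC"] and case["NR"])))

# Severe austerity plus rapid price rises is sufficient on its own:
print(predicts_protest({"SA": 1, "PR": 1, "PM": 0, "GC": 0, "NR": 0}))  # True
```

Note that severe austerity (SA) is a necessary component of both paths: without it, neither combination predicts protest.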

This slide show from Charles C. Ragin provides a detailed explanation, with examples, answering the question 'What is QCA?'

This book, by Schneider and Wagemann, provides a comprehensive overview of the basic principles of using set theory to model causality, and of applications of Qualitative Comparative Analysis (QCA), the most developed form of set-theoretic method.

This article by Nicolas Legewie provides an introduction to Qualitative Comparative Analysis (QCA), discussing the method's key concepts, main principles, and advantages.

COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) is a website designed to develop the use of systematic comparative case analysis as a research strategy by bringing together scholars and practitioners who share its use.

This paper by Patrick A. Mello reviews current applications of Qualitative Comparative Analysis (QCA) in order to take stock of what is available and to highlight best practice in this area.

Marshall, G. (1998). Qualitative comparative analysis. In A Dictionary of Sociology Retrieved from https://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/qualitative-comparative-analysis

Resources related to 'Qualitative comparative analysis':

  • An introduction to applied data analysis with qualitative comparative analysis
  • Qualitative comparative analysis: A valuable approach to add to the evaluator’s ‘toolbox’? Lessons from recent applications

'Qualitative comparative analysis' is referenced in:

  • 52 weeks of BetterEvaluation: Week 34 Generalisations from case studies?
  • Week 18: is there a "right" approach to establishing causation in advocacy evaluation?

Framework/Guide

  • Rainbow Framework :  Check the results are consistent with causal contribution
  • Data mining


© 2022 BetterEvaluation. All rights reserved.

The University of Manchester home

Qualitative comparative analysis

Wendy Olsen, CCSR/Social Statistics.

Videos by Wendy Olsen: 'Qualitative comparative analysis' and 'What is qualitative comparative analysis?'

Qualitative Comparative Analysis (QCA) offers a new, systematic way of studying configurations of cases. QCA is used in comparative research and with case-study research methods. The QCA analyst interprets the data qualitatively whilst also examining causality between the variables. Thus the two-stage approach to studying causality has a qualitative first stage and a systematic second stage using QCA.

QCA is truly a mixed-methods approach to research. The basic data-handling mechanism is a simple qualitative data table: a matrix of rows (cases) and columns (attributes). Its column elements can be binary (yes/no), ordinal, or scaled index variates. QCA is best suited to small- to medium-N case-study projects with between 3 and 250 cases.

Crisp-set QCA uses only binary variates for its truth table. Fuzzy-set QCA also uses ordinal variates. A variate is a column of numbers representing real, not hypothetical, cases. In implementing QCA, one can code up the case-study data using NVIVO 7 software to create substantive case attributes. Multiple-level nested or non-nested cases can be handled. Fuzzy-set analysis is an optional extra stage, which also uses Boolean logic, but which is not necessary for QCA and tends not to be as qualitative as crisp-set QCA (csQCA) itself.

Venn Diagram produced by the 'visualizer' tool

Experts at Manchester

  • Dr Wendy Olsen , Senior Lecturer in Social Science Research Methods (SED) and in Socio-Economic Research (SOSS)
  • Matthias Vom Hau, Brooks World Poverty Institute (now joined with the Institute for Development Policy and Management to become the Global Development Institute)

A research grant from the British Academy allowed Manchester University to host an Expert Roundtable on the Study of Strategies of Social Change using the Method of Qualitative Comparative Analysis (QCA) in 2008. Experts from Manchester University and the UK then visited Japan to hold a second roundtable there in 2009. A mixed-methods research training workshop took place on 15 June, 2010.

Key references

  • Rihoux, B., & Ragin, C. C. (Eds.) (2009). Configurational comparative methods: Qualitative Comparative Analysis (QCA) and related techniques (Applied Social Research Methods). Thousand Oaks, CA, and London: Sage.
  • Rihoux, B., & Grimm, M. (Eds.) (2006). Innovative comparative methods for policy analysis: Beyond the quantitative-qualitative divide. New York, NY: Springer.
  • Ragin, C. C. (2008). Redesigning social inquiry: Set relations in social research. Chicago: University of Chicago Press.
  • Ragin, C. C. (2000). Fuzzy-set social science. Chicago and London: University of Chicago Press. (One only needs to read the first half to cover QCA; the second half covers fuzzy-set analysis.)
  • Byrne, D., & Ragin, C. C. (Eds.) (2009). Handbook of case-centred research methods. London: Sage.
  • QUAL-COMPARE email discussion list - JISC email list about Qualitative Comparative Analysis
  • Compasss - international network promoting small-N and medium-N comparative methods

Wendy Olsen offers software support and advice via both of these websites.

Staff interested in the qualitative software NVivo

  • Dr. Rudolf Sinkovics  - Alliance Manchester Business School (AMBS), International Business
  • Dr. Yanuar Nugroho  - formerly Alliance Manchester Business School (AMBS), Technological innovation
  • Prof Cathy Cassell  - formerly Alliance Manchester Business School (AMBS), Qualitative methods in organisational research
  • Dr. Richard Kyle  - Nursing, Midwifery and Social Work, Health and social geographer
  • Dr. Sarah Kendal  - Nursing, Midwifery and Social Work, Emotional wellbeing interventions
  • Dr. Ziv Amir  - Nursing, Midwifery and Social Work, Survivorship and cancer
  • Dr Linda McGowan  - Nursing, Midwifery and Social Work, Women's health  
  • Prof. Alys Young  - Nursing, Midwifery and Social Work, Social work research
  • Wendy Olsen  - School of Social Sciences, Sociology of economic life
  • Prof. Jennifer Mason  - Social Sciences, Kinship and family

Staff interested in the qualitative software ATLAS.ti

  • Dr. Jane Griffiths  - Nursing, Midwifery and Social Work, Supportive and Palliative Care in Community Nursing
  • Dr. Malcolm Campbell  - Nursing, Midwifery and Social Work, Statistics
  • Prof. Christi Deaton  - Nursing, Midwifery and Social Work, Structural equation modelling and multi-level modelling
  • Prof. Peter Callery  - Nursing, Midwifery and Social Work, Self care of long term conditions in childhood and young people

Download PDF slides of the presentation 'What is QCA?'


Qualitative Comparative Analysis in Political Science: A Study of Media Effects on the Policy Agenda

  • By: Rianne Dekker
  • Product: Sage Research Methods Cases Part 2
  • Publisher: SAGE Publications Ltd
  • Publication year: 2019
  • Online pub date: January 02, 2019
  • Discipline: Political Science and International Relations
  • Methods: Qualitative comparative analysis, Comparative research, Grounded theory
  • DOI: https://doi.org/10.4135/9781526467553
  • Keywords: agendas, framing, media coverage, media effects, publications
  • Online ISBN: 9781526467553
  • Copyright: © SAGE Publications Ltd 2019

Qualitative comparative analysis is a relatively new research method in political science and public administration for finding patterns in qualitative data in a small to medium-sized set of cases. In my PhD research, I used this method to study under what conditions media coverage of policy issues is associated with changes on the policy agenda. My research focused on the policy agenda of immigration. This contribution outlines reasons to choose qualitative comparative analysis, different types of qualitative comparative analysis, the process of conducting qualitative comparative analysis, and lessons learned on the benefits and limitations of this method. Qualitative comparative analysis offers advantages when comparing a relatively large number of cases, when testing configurational hypotheses, and when assuming a non-linear notion of causality. In my research, I applied the most basic type, crisp set qualitative comparative analysis. Multi-value and fuzzy-set qualitative comparative analysis allow for comparison of case characteristics in more detail. The process of conducting qualitative comparative analysis can be visualized as an hourglass, starting and ending with the richness of qualitative case data, with the minimization of the logical patterns into a comprehensive formula in between. The main benefit of qualitative comparative analysis is that it supports the process of systematically comparing complex qualitative cases while keeping the focus on the research puzzle at hand. The most important lesson is not to get caught up in the technicalities: qualitative comparative analysis is a means for interpreting qualitative data and not a goal in itself.

Learning Outcomes

By the end of this case, students should be able to

  • Explain what qualitative comparative analysis is
  • Decide when it is appropriate to use qualitative comparative analysis
  • Distinguish between different types of qualitative comparative analysis
  • Design their own crisp set qualitative comparative analysis study
  • Reflect upon opportunities and limitations of qualitative comparative analysis

Introduction

When conducting qualitative case studies, at some point, you may start feeling overwhelmed by large amounts of rich qualitative data from the various cases you are comparing. Because of the work that has gone into collecting the data, you seem to see only the intricacies of each case, but you have lost sight of the bigger picture and patterns throughout the cases. When you find yourself in such a situation, qualitative comparative analysis (QCA) might offer a solution for you.

QCA is a configurational comparative research approach that allows researchers to systematically compare characteristics of cases and uncover patterns in qualitative data. The method was originally developed in the 1980s by Charles Ragin (1987). The methodology has since developed further and has become common practice in the social sciences in general, and in political science and public administration in particular (Rihoux, Álamos-Concha, Bol, Marx, & Rezsöhazy, 2013). Currently, there are several handbooks available addressing various types of QCA and software packages to support the analysis (Rihoux & Ragin, 2009; Schneider & Wagemann, 2012; Thiem & Dusa, 2012). There is also an international research community, “Compasss,” that organizes seminars, manages a bibliographical and data archive, produces publications, and is engaged in software development.

I learned about QCA methodology during my PhD research when I found myself in a situation as described above. I was studying the effects of media coverage on immigration and asylum on the Dutch policy agenda. I had selected 16 cases that had received different amounts and types of media attention. I studied each of the cases in depth by analyzing media coverage on television, in newspapers, opinion magazines, and on social media. I also studied references to specific cases on the national policy agenda in policy memoranda that were sent from government to parliament. Learning about the complexity of each individual case, and dealing with a relatively large variety of 16 cases, made me lose sight of the research puzzle that I was addressing: Under what conditions is media coverage for policy issues associated with changes on the policy agenda?

In the following, I will explain how QCA proved a helpful methodology for making sense of the rich qualitative data and how it helped me develop an answer to my research question. I will address several lessons learned on when to apply this methodology, the different types of QCA, and how QCA is put into practice.

When to Use QCA?

QCA was designed to bridge the gap between in-depth study of several cases and quantitative research focusing on variables in a large number of cases ( Grofman & Schneider, 2009 ). It is a systematic and transparent way of tracing patterns in qualitative data from multiple cases—usually a small to medium-sized set of between 5 and 50 cases ( Rihoux & Ragin, 2009 ), but increasingly also larger numbers of cases. When dealing with these numbers of cases and large amounts of rich qualitative data per case, other comparative case-study methods (cf. Blatter & Haverland, 2012) may not suffice. The number and heterogeneity of cases make it just too complex.

Furthermore, QCA is an appropriate method when theory implies that not single variables but configurations of factors determine a certain outcome, and that different combinations of factors may generate this outcome. QCA assumes that certain “configurations” of factors (so-called conditions) are sufficient or necessary for achieving a certain outcome. In the case of my research, theory assumed contingency of media effects on the policy agenda ( Walgrave & Van Aelst, 2006 ; Wolfe, Jones, & Baumgartner, 2013 ).

An additional reason to use QCA might be the notion of causality that underlies this methodology. QCA assumes equifinality and multifinality ( Verweij & Gerrits, 2012 , p. 27). This means taking into account that different conditions can produce similar outcomes, and that the same condition can produce different outcomes in different contexts (or configurations). In the case of my research, I expected that media effects on the policy agenda derive from different aspects of media coverage, including the amount of media attention and the “framing” of the media coverage. Furthermore, it was expected that, for example, large amounts of media attention would not have an effect on the policy agenda in all cases. The theoretical assumption in policy agenda-setting research is that media effects do not result from a linear causal process but entail complex causal interactions between the media and policy agenda ( Boydstun, 2013 ; Wolfe et al., 2013 ). Therefore, a different notion of causality fit my research subject.

Choosing Between Different Types of QCA

There are three basic types of QCA: crisp set QCA (csQCA), multi-value QCA (mvQCA), and fuzzy-set QCA (fsQCA) ( Rihoux & Ragin, 2009 ; Schneider & Wagemann, 2012 ). The simplest and original type of QCA is “crisp set” ( Ragin, 1987 ). As a beginner in using the methodology, it proved helpful to start with this type of QCA. In csQCA, each condition and the outcome are operationalized so that they have two possible values: 0 or 1. For example, I attributed the value 1 to media attention when more than 100 publications in our selection of newspapers, opinion magazines, and TV programs appeared within 6 months after the onset of media coverage of the case, and 0 when this was not the case (see Table 1 ). A downside to csQCA is that I lost some of the complexity and richness of the data: there was more variety in numbers of publications and framing that I could have used in the analysis. Also, the threshold for inclusion and exclusion should be transparent and justified.
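The crisp-set coding described here amounts to a one-line rule. A Python sketch (the function name is my own; the 100-publication threshold is the one reported above):

```python
def media_attention(n_publications, threshold=100):
    """csQCA coding: 1 if the case drew more than `threshold`
    publications within six months of onset, else 0. The threshold
    must be made transparent and justified."""
    return 1 if n_publications > threshold else 0

print(media_attention(250))  # 1
print(media_attention(40))   # 0
```

Every condition and the outcome get a rule of this kind, which is what makes the coding transparent and replicable.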

To maintain more richness of the data, I could have used mvQCA or fsQCA. In mvQCA, a condition is attributed multiple values, for example, 1, 2, 3, and 4 ( Vink & Van Vliet, 2009 ; 2013 ; Thiem, 2013 ). MvQCA allows the researcher to deal with non-dichotomous conditions which cannot be easily quantified. For example, in my study, I could have used mvQCA to attribute the dominant media frame (a way of presenting the news) to the cases, distinguishing between a human interest, threat, economic, and managerial frame ( d’Haenens & de Lange, 2001 ; Vliegenthart, 2007 ). MvQCA requires more complex analyses because there are more logically possible configurations. With a relatively small number of cases, there will also be more logical remainders: configurations of conditions that are logically possible but absent in my set of cases.

The last and most advanced type of QCA is fsQCA (Ragin, 2000, 2008). In fsQCA, a condition is quantified on a scale of 0 to 1. By allowing for a large variety of values, the richness of qualitative data can be better preserved in the analyses. However, the operationalization of this richness in quantitative terms somewhat diminishes the qualitative nature of the method. If your conditions can easily be quantified on a scale of 0 to 1, why not just do a regression analysis estimating relationships between variables ( Vis, 2012 )? The answer lies in the reasons for choosing QCA that I listed in the previous section. First, regression will only produce meaningful results if you have a moderately large N of cases. Second, there might not be enough cases to include different hypotheses and variables in your model. Finally, there is a theoretical advantage of being able to test set-theoretical hypotheses combining several conditions and assuming a more complex notion of causality.
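By way of contrast with crisp sets, fuzzy-set membership is usually assigned by "direct calibration" around three qualitative anchors (full non-membership, crossover point, full membership). A rough sketch of that idea, not taken from this case study, assuming a standard logistic transform with log-odds of ±3 at the outer anchors:

```python
import math

def fuzzy_calibrate(x, full_out, crossover, full_in):
    """Map a raw score to fuzzy membership in [0, 1]: about 0.05 at
    full_out, exactly 0.5 at crossover, about 0.95 at full_in."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical anchors for 'media attention': 20 publications = fully
# out, 100 = crossover, 300 = fully in.
print(round(fuzzy_calibrate(100, 20, 100, 300), 2))  # 0.5
```

The researcher's qualitative judgment goes into choosing the three anchors; the transform itself is mechanical.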

Research Practice of QCA

The process of QCA can be visualized as an hourglass. First, in-depth qualitative analysis of the cases results in a large and rich body of data on each case. Second, QCA entails a reduction of the complexity by summarizing the cases on relevant conditions and the outcomes and looking for patterns of co-occurrences (configurations). Finally, the relevant configurations are re-interpreted in light of the richness of the qualitative case data. Residual complexity and exceptional “black swan” cases that do not fit the main patterns in the data are assumed to be present and you can use them for more in-depth interpretation of the findings.

QCA starts with a research question based on a theoretical research puzzle and can thus be used to test hypotheses. The method is less suited for a grounded theory approach that focuses on theory development (cf. Glaser & Strauss, 1967 ). QCA involves identifying relevant conditions and testing how certain configurations of conditions are associated with an outcome. Therefore, you need to have a basic idea on the specific case-characteristics that will be relevant. In my PhD research, the focus was the study of media effects on the policy agenda. I wanted to analyze the effects of different aspects of media coverage, including volume of media attention, frame contestation, and frame consonance of the media coverage (see Table 1 and Dekker & Scholten, 2017 , for the specifics).

You can use different rationales for choosing your cases. When the outcome you want to study is known in the cases, you can purposively select a number of cases with and without this outcome. If the outcome is not yet known at the point of case selection or if your analysis has a more explorative nature, you can do a random selection or purposively heterogeneous selection of cases. In case of my research, I selected different types of issues that gained media attention to explore what conditions and outcomes were present. This, for example, included cases of individual immigrants, larger groups of immigrants and policy proposals. In other research, case-selection may not be required: for example, if your cases are the 28 European Union (EU) member states.

Qualitative data that you collect on your cases may include documents, interviews, observations, or a combination of these. Data collection should include all data required to provide you with in-depth knowledge of the cases. These data should be processed (transcribed and/or ordered) and coded on the conditions. It can also include a process of open coding to find new conditions that may be relevant to explaining the outcome.

The next step in QCA is calibration. This involves the synthesis of the data into a “raw data matrix.” All cases are listed in the left column of the table and the conditions and outcome are listed in the top row. Each case is scored on these different conditions. The type of score depends on the chosen type of QCA (csQCA, mvQCA, or fsQCA). In my crisp set analysis, I scored all 16 cases on the different conditions as 0 or 1 (see Table 2 ). As this table demonstrates, in 9 of 16 cases, the policy frame remained the same over 1 year after the onset of media attention for the cases. This was interpreted as no media effect having taken place. In 7 of the 16 cases, the framing of the issue on the policy agenda changed within a period of 1 year after the onset of the issue.

The raw data matrix is then transformed into a so-called “truth table.” The truth table lists all the logically possible combinations of conditions and sorts the cases accordingly. Our truth table has eight logically possible configurations (2^3). Each configuration is presented as a row ( Table 3 ). Two logical configurations were not present in any of the cases.

C: contradictory row; R: Logical remainder.
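With three dichotomised conditions, the 2^3 = 8 rows of the truth table can be generated mechanically. A Python sketch using invented stand-in data rather than the study's actual 16 cases (the condition names follow the aspects of media coverage named earlier: volume, contestation, consonance), which also flags contradictory rows ("C") and logical remainders ("R"):

```python
from itertools import product

def build_truth_table(data, conditions, outcome):
    """Enumerate all 2^k logically possible configurations, sort the
    cases into them, and flag each row: its outcome value, 'C' for a
    contradictory row, or 'R' for a logical remainder (no cases)."""
    rows = {}
    for config in product((0, 1), repeat=len(conditions)):
        members = [case for case in data
                   if tuple(case[c] for c in conditions) == config]
        outcomes = {case[outcome] for case in members}
        if not members:
            status = "R"
        elif len(outcomes) > 1:
            status = "C"
        else:
            status = str(outcomes.pop())
        rows[config] = (len(members), status)
    return rows

# Invented stand-in cases: three conditions plus a binary outcome.
data = [
    {"volume": 1, "contest": 1, "conson": 0, "change": 1},
    {"volume": 1, "contest": 1, "conson": 0, "change": 1},
    {"volume": 0, "contest": 0, "conson": 1, "change": 0},
    {"volume": 1, "contest": 0, "conson": 1, "change": 0},
    {"volume": 1, "contest": 0, "conson": 1, "change": 1},  # contradiction
]
table = build_truth_table(data, ["volume", "contest", "conson"], "change")
```

Every case falls into exactly one of the eight rows; rows with no cases are the logical remainders and rows with mixed outcomes are the contradictions.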

Following the Boolean logic of QCA, the rows in the truth table were “minimized” into a comprehensive formula. The information in the truth table is logically summarized in a set of propositions that holds true for our set of cases ( Table 4 ). This process involves pairwise comparison of the configurations that agree on the outcome and differ in but one other condition ( Ragin, 1987 ). Presence of a condition is indicated by upper-case letters, absence of a condition by lower-case letters, and the logical operators AND and OR are indicated with * and +. Contradictory configurations are not included in the minimization process, but I listed them as such.

Lower case indicates absence of condition. *: AND; +: OR.

Developing a truth table and the process of minimization become more complex when more conditions are involved and when dealing with mvQCA or fsQCA. In any case, using a specialized software package for QCA will be helpful. Several are available and you can find a good overview on the website of COMPASSS ( Thiem & Dusa, 2012 ; Compasss website). For my csQCA, I used the specialized software package Tosmana (Tool for Small-N Analysis) ( Cronqvist, 2011 ). For mvQCA and fsQCA, there are other options, and an open-source software package that works in the R statistical computing environment is advised ( Thiem & Dusa, 2012 ).

The last and a very important stage in QCA is re-interpreting the propositions resulting from the QCA in light of theory and the in-depth qualitative data on your cases ( Schneider & Wagemann, 2010 , 2012 ). You will be addressing the coverage and consistency of the proposition. Coverage expresses how much of the outcome is explained (covered) by a certain condition from the proposition. In my analysis, in all cases corresponding with policy change, the condition of frame contestation was present. This coverage score indicates that frame contestation is a necessary condition in configurations associated with media effects on the policy agenda. The consistency score expresses the contradictions within the formula. A high consistency score indicates a strong pattern. Based on these sensitivity tests, researchers can decide to replace or remove certain conditions, alter the threshold of how the condition or outcome is operationalized or exclude certain cases ( de Block & Vis, 2018 ).
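For crisp sets, coverage and consistency reduce to simple proportions. A Python sketch with invented data (the function names are my own): consistency is the share of cases satisfying a configuration that also show the outcome, and coverage is the share of outcome cases that the configuration accounts for:

```python
def consistency(cases, in_config, outcome="change"):
    """Share of the cases satisfying the configuration that also
    show the outcome (1.0 means no contradictions)."""
    members = [c for c in cases if in_config(c)]
    return sum(c[outcome] for c in members) / len(members) if members else None

def coverage(cases, in_config, outcome="change"):
    """Share of the outcome cases accounted for by the configuration."""
    with_outcome = [c for c in cases if c[outcome]]
    hits = sum(1 for c in with_outcome if in_config(c))
    return hits / len(with_outcome) if with_outcome else None

# Invented cases: frame contestation vs. policy change.
cases = [
    {"contest": 1, "change": 1},
    {"contest": 1, "change": 1},
    {"contest": 1, "change": 0},
    {"contest": 0, "change": 0},
]
rule = lambda c: c["contest"] == 1
print(consistency(cases, rule))  # about 0.667: one contradictory case
print(coverage(cases, rule))     # 1.0: all outcome cases show contestation
```

A coverage of 1.0, as in this toy example, is the pattern behind the claim that frame contestation is present in all cases of policy change.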

Minimization is not the end stage of QCA. Interpreting the patterns in light of the rich qualitative data is a must and can also help interpret contradictory patterns in which similar configurations lead to different outcomes. As usual in QCA, there were notable exceptions to the general patterns identified in my study. The configurations resulting from QCA were used as a starting point for re-interpreting the patterns, uncovering underlying causal mechanisms, and pointing at avenues for future research.

The “black swan” cases in my research indicated that an issue frame being promoted by a strong coalition of policy stakeholders in the media is an important condition for a media effect to take place. These stakeholders were using media besides other lobby channels to influence the policy agenda. Furthermore, the role of the political “vestibule” to policy change should not be underestimated. Political actors were often present as sources in contesting media coverage. They made issues public via the media to gain support for their policy alternatives. Also, parliamentary debate was often an intermediary step to policy change. Finally, the stability of the government coalition behind the current policies was an important factor in media effects on the policy agenda.

Conclusions and Lessons Learned

Using QCA for analysis of media effects on the policy agenda in 16 cases of media coverage has been a valuable learning process during my PhD research. QCA brings structure and transparency to a process that you always go through when comparing qualitative cases: tracing differences and commonalities between the cases on conditions that you deem relevant based on theory. It enables you to deal more systematically with a small to medium-sized set of cases and with hypotheses combining various conditions.

CsQCA is a good way to start your QCA experience as it is the most basic form that can even be done without analysis software. A downside to this type of QCA is losing much of the richness of the qualitative data that you collected. Even when you bring this back in when re-interpreting the patterns that you encountered on the basis of this in-depth data, more analytical sophistication can be achieved with mvQCA or fsQCA.

QCA helped me not to lose sight of my research question when interpreting qualitative data on a variety of 16 cases. An important lesson has also been to keep the richness of qualitative case data at the core of QCA and not to get caught up in the QCA technicalities. The bottom line is that QCA should help you interpret your research data; it should not become a goal in itself.

Exercises and Discussion Questions

  • 1. When is QCA a suitable methodology for case-study research?
  • 2. What types of cases can be used for QCA in the field of political science and public administration?
  • 3. What types of QCA exist and which are suitable for your research data?
  • 4. What are the steps in QCA research?
  • 5. How do you deal with contradictory cases in QCA?
  • 6. Is QCA mainly a qualitative or quantitative research method? And what about fsQCA?

Further Reading

Web resources.

www.compasss.org


UNICEF Innocenti

Comparative Case Studies

Comparative Case Studies: Methodological Briefs - Impact Evaluation No. 9


Publication series: Methodological Briefs

No. of pages: 17


Case Studies and Comparative Methods for Qualitative Research

Eleanor Knott

This course focuses on how to design and conduct small-n case study and comparative research. Thinking beyond their own areas of interest, specialisms, and topics, students will be encouraged to develop the concepts and comparative frameworks that underpin these phenomena; in other words, students will begin to develop their research topics as cases of something. The course covers questions of design and methods of case study research, from single-n to small-n case studies, including discussions of process tracing and Mill's methods. It addresses both the theoretical and methodological discussions that underpin research design and the practical questions of how to conduct case study research, including gathering, assessing, and using evidence. Examples from the fields of comparative politics, IR, development studies, sociology, and European studies will be used throughout the lectures and seminars.

Related Papers

Dr. BABOUCARR NJIE


American Journal of Qualitative Research

Nikhil Chandra Shil, FCMA

A. Biba Rebolj

Khullar Junior

Although case study methods remain a controversial approach to data collection, they are widely recognised in many social science studies, especially when in-depth explanations of a social behaviour are sought. This article therefore discusses several aspects of case studies as a research method, including the design and categories of case studies and how their robustness can be achieved. It also explores the advantages and disadvantages of the case study as a research method.

special issue: the first-year experience

Fabian Frenzel

David E Gray

The Journal of Agricultural Sciences - Sri Lanka

rohitha rosairo

We receive a large number of manuscripts for possible publication in this journal. In reviewing them, we find that the bulk are from the areas of crop sciences, livestock production, and allied fields that have used experiments as the research strategy. The minority that fall into the areas of agribusiness, agricultural economics, and extension have used a survey strategy. Other research strategies are under-utilized in current research. Research has to commence with a clear direction and a clearly identified study process, which are primarily provided by its research strategy (Wedawatta, 2011). There are numerous strategies that a researcher can adopt to achieve the objectives of a particular research study. Some common research strategies are: experiment, survey, archival analysis, ethnography, action research, narrative inquiry, and the case study. This paper explains what a case study is and outlines the components of a case study.

The Nature of a Case Study

Yin (2003) defines a case study as 'an empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident'. A phenomenon and its context are not always distinguishable in real-life situations. Therefore, a case study uses a larger number of variables of interest than data points and essentially relies on multiple sources of evidence for data triangulation. A historical viewpoint on the case study strategy is presented in Tellis (1997). According to Yin (2003), case studies can be exploratory, explanatory, or descriptive. Research in the social sciences deals with interactions between institutions and human behaviour. These can best be studied in real-life settings and contexts. Sometimes an inquiry may be undertaken on an individual organization with a limited or narrow population. This suggests qualitative investigation: the assessment of attitudes, opinions, and behaviour (Kothari and Garg, 2018). Such qualitative investigations are characteristic of the case study strategy. Whilst often identified as interpretivist, case studies can also be used in positivistic research (Saunders et al. 2012).


Lesley Bartlett, Frances Vavrus

What is a case study and what is it good for? In this article, we argue for a new approach—the comparative case study approach—that attends simultaneously to macro, meso, and micro dimensions of case-based research. The approach engages two logics of comparison: first, the more common compare and contrast; and second, a 'tracing across' sites or scales. As we explicate our approach, we also contrast it to traditional case study research. We contend that new approaches are necessitated by conceptual shifts in the social sciences, specifically in relation to culture, context, space, place, and comparison itself. We propose that comparative case studies should attend to three axes: horizontal, vertical, and transversal comparison. We conclude by arguing that this revision has the potential to strengthen and enhance case study research in Comparative and International Education, clarifying the unique contributions of qualitative research.




Similarities Across Qualitative Data Analysis Methods 

The majority of qualitative researchers emphasize the importance of being simultaneously involved in both data collection and data analysis (Braun & Clarke, 2013; Charmaz, 2006; Creswell, 2007; Maxwell, 2013; Miles, Huberman, & Saldaña, 2014; Silverman, 2014). In the words of Miles, Huberman, and Saldaña (2014, p. 70), this simultaneous involvement permits a "healthy corrective for built-in blind spots," resulting in not only a richer analysis but a more compelling one, if the analyst uses the first waves of data collection to verify, shape, and further build their understanding of the dataset. Perhaps the most notable similarity across the majority of qualitative analysis methods is the identification of themes, patterns, processes, and/or profiles (Coffey & Atkinson, 1996; Creswell, 2007; Dey, 1993; Miles, Huberman, & Saldaña, 2014; Seidman, 2006). This is achieved by searching for patterns or regularities across the data, most typically by comparing and contrasting data segments and thus delineating the overarching themes, patterns, and/or processes (Flick, 2009). For example, in their seminal book, Glaser and Strauss (1967) present the constant comparative method of qualitative data analysis, which combines the explicit coding procedures of hypothesis-testing approaches with the practices of theory-generating approaches. Although qualitative data analysis methods may vary in the exact tactics for identifying trends in the data, this feature is nearly always present. To make sense of overarching patterns in a dataset, many qualitative researchers advocate creating thematic maps, matrices, and/or networks (i.e., figurative or tabular representations of the analysis; Braun & Clarke, 2013; Corbin & Strauss, 2015; Dey, 1993; Flick, 2009; Maxwell, 2013; Miles, Huberman, & Saldaña, 2014; Wolcott, 1994).
By displaying data in easily accessible maps or networks, the analyst not only organises the information but can also examine the overall picture, discern how categories and concepts are related, and draw conclusions. Creating a conceptual framework from existing literature is another common feature across qualitative data analysis approaches (Maxwell, 2013; Miles, Huberman, & Saldaña, 2014). Such conceptual frameworks are constructed, rather than simply found in an existing study, meaning that the researcher has to analyse and synthesise this previous information, thus laying the foundation for the data collection and analysis (Miles, Huberman, & Saldaña, 2014). The process of elaborating these different kinds of displays inherently involves analysis and interpretation, thus facilitating meaning-making. One of the most widely used tactics across all qualitative research is the practice of coding. Codes are essentially short descriptive or inferential labels that are assigned to data segments in order to condense and categorize the dataset (Miles, Huberman, & Saldaña, 2014; Saldaña, 2013). There are diverse coding methods put forth by various qualitative methodologists, and qualitative researchers often either choose the coding methods appropriate for their study or follow the recommended coding methods of their given methodology. Although terminology may differ across approaches, there are certainly some parallels: open coding in grounded theory, for example, is akin to identifying significant statements in phenomenology, which is likewise similar to categorical development in case study research (Creswell, 2007). A common "end point" of coding is theoretical saturation—reaching the point at which no new knowledge is generated (Braun & Clarke, 2013; Denzin & Lincoln, 2005; Flick, 2009).
Once the data has sufficiently "saturated" the analyst's theoretical understanding, the researcher may proceed to map out the descriptions and relations of each category, draw (tentative) conclusions, and verify these conclusions to ensure they represent the data and provide meaningful interpretations for answering the research question(s).
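The saturation criterion described above can be given a concrete, if simplified, form: if we record the set of codes contributed by each successive interview, saturation corresponds to the point at which new waves of data stop introducing previously unseen codes. The following sketch is purely illustrative; the function name and example codes are invented for this purpose.

```python
# Illustrative sketch: approximating theoretical saturation by counting how
# many previously unseen codes each successive wave of data contributes.
def new_codes_per_wave(waves):
    """Return, for each wave of coded data, the number of codes not seen before."""
    seen = set()
    counts = []
    for wave in waves:
        fresh = set(wave) - seen   # codes this wave introduces
        counts.append(len(fresh))
        seen |= set(wave)
    return counts

waves = [
    ["isolation", "workload", "support"],   # interview 1
    ["workload", "support", "autonomy"],    # interview 2
    ["support", "autonomy"],                # interview 3: nothing new
]
print(new_codes_per_wave(waves))  # [3, 1, 0]: the third wave adds no new codes
```

In practice, saturation is a judgement about meaning rather than a simple count, but tracking new codes per wave can prompt exactly that judgement.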

Differences Across Qualitative Data Analysis Methods 

The process of qualitative data analysis is in constant flux: no two methodologies are carried out in exactly the same way, as each study and corpus of data are unique (Miles, Huberman, & Saldaña, 2014; Patton, 2002). The literature review, for example, is a part of nearly any research project, but when and how it is carried out can differ. Grounded theorists aim to generate a theory inductively, so the literature review is delayed until the researcher has developed a conceptual analysis of their data (Charmaz, 2006; Corbin & Strauss, 2015; Glaser & Strauss, 1967; Strauss & Corbin, 1990). Other analytic methods, in contrast, integrate the literature review from the very beginning, often advising an iterative approach to reading the literature and analysing the data, so that the developing analysis can be informed by and contrasted with existing research (Braun & Clarke, 2013; Miles, Huberman, & Saldaña, 2014). Along these lines, qualitative analysis methods can be distinguished by their focus on inductive or deductive analyses (Creswell, 2007; Given, 2008; Miles, Huberman, & Saldaña, 2014). Researchers can use deductive strategies to discern the extent to which their data supports or challenges current theoretical or conceptual knowledge. Thus, deductive analyses are commonly used to "test" theories. Inductive strategies can be used to analyse data "from the ground up," and they are commonly used to "generate" theories. Many qualitative studies use some form of inductive analysis (Yin, 2011), but perhaps the most notable is the grounded theory approach to qualitative data analysis (Glaser & Strauss, 1967). In practice, both deductive and inductive strategies may be combined to facilitate a foundational understanding of the topic (via a literature review, for example) whilst allowing new, unanticipated information to emerge from the dataset. The process of writing memos may also differ across qualitative analysis methods.
Memoing can encompass the researcher's process of making sense of the data through reflexive notes, analytic ideas, and documentation of the developing research. Thus, memos often form an integral part of the qualitative analysis process (Braun & Clarke, 2013; Given, 2008; Glaser & Strauss, 1967; Miles, Huberman, & Saldaña, 2014). Beyond this basis, the types and functions of memos may greatly vary. For example, researchers could use initial memoing during open coding to help conceptualize incidents, followed by theoretical memoing to transfer between substantive codes and theoretical codes (Glaser, 2005). Alternatively, "stand-alone" memos can be used for different specific purposes, such as the research diary, team work memo, idea memo, code memo, theory and literature memo, and research questions memo (Friese, 2014). Instead of using memos for organizing a project, others suggest using purely analytic memos (Charmaz, 2006; Saldaña, 2013; Tracy, 2013). In some cases, memos are dedicated to the development of emergent categories (Charmaz, 2006), while in other cases, memos are used from the very beginning of data collection all the way through the verification of the conclusions (Miles, Huberman, & Saldaña, 2014). While memo-writing is inherent to most qualitative studies, each researcher can adapt different memoing strategies according to their methodology and research aims. Perhaps the most notable difference across qualitative data analysis methods comes down to how the data are coded. Codes can be used to identify recurring patterns, organise the chunks of data that go together, and trigger deeper reflection on the data's meaning. The actual process of coding data, however, can be as varied as the data itself. 
Some qualitative methodologies provide clear coding guidelines, for which grounded theory is particularly distinguished (Corbin & Strauss, 2015; Glaser, 2005; Glaser & Strauss, 1967; Strauss, 1987; Strauss & Corbin, 1990), while other methodologies leave the coding methods much more open-ended, as in the example of thematic analysis (Braun & Clarke, 2013). Saldaña (2013) outlines up to 29 different coding methods in his cornerstone manual, many of which can be compatibly mixed and matched, so analysts may choose which ones will help them answer their research questions (Saldaña, 2013). Charmaz (2006) suggests coding line-by-line, in order to focus the researcher's attention on the data and keep an open mind to any emerging nuances. Gibbs (2007) advocates systematic comparison of codes to develop more interpretative, rather than descriptive, analyses. Miles and Huberman follow Saldaña's approach, whereby coding is divided into two main stages: First Cycle codes are those that are initially assigned to the data, while Second Cycle codes build on these initial codes and group them into meaningful categories, themes, or constructs (Miles, Huberman, & Saldaña, 2014). Ultimately, the coding methods may depend on the research questions and nature of the study, which is why this is one of the most variable points across qualitative data analysis. Finally, qualitative data analysis methods can differ in their applicability to those who are new to qualitative research. Given the interpretative nature of qualitative analysis, certain methods are generally easier to grasp for novices while others are considerably more complex and thus better understood by more experienced qualitative researchers. Researchers today are making efforts to outline methods more suitable for novice qualitative researchers. 
Silverman (2014), for example, offers a thorough review of different qualitative analysis methods, including content analysis, grounded theory, and narrative analysis, along with exercises to help students apply the different principles. However, the rich breadth and openness of the approach can be paradoxically challenging for students—or "potential" researchers—who are simply seeking guidance on how to approach their qualitative research (Kalekin-Fishman, 2001). Wolcott (1994), on the other hand, gives advice for teaching qualitative analysis, although his practical advice focuses more on teaching ethnographic analyses to graduate students, thus providing more depth but less breadth in regard to qualitative data analysis across research areas. Dey (1993) puts forth a pragmatic guide for students—explaining the iterative spiral of qualitative data analysis through collecting, describing, classifying, and connecting data—and he focuses on applying this to qualitative data analysis software. Braun and Clarke (2013) have developed a practical guidebook that walks researchers through the processes of thematic analysis, interpretative phenomenological analysis, and pattern-based discourse analysis, as these are very common practices that are likewise relatively accessible to those who are new to qualitative research. They particularly focus on thematic analysis in guiding new researchers through their first try at qualitative data analysis, although they do also outline other methods that require more advanced skills in order to give the reader a sense of the wider scope and diversity of qualitative analysis, including methods of discursive psychology, conversation analysis, and narrative analysis. Gibbs (2007) provides practical information for students on how to deal with textual data, and he provides an effective overview of basic analytic processes which are organised in five steps: data preparation, data extension, coding, comparing, and writing a report. 
Miles, Huberman, and Saldaña's (2014) approach involves three main stages—data condensation, data display, and conclusion drawing and verification—and the authors likewise take the time to point out the coding and analytic strategies that are better suited for novice researchers, such as In Vivo coding and Initial coding. Given the breadth of possible approaches to qualitative analysis for novices, we set out to provide guidelines that integrate the advice of multiple qualitative research experts into an easily understandable model.

Proposing a New Model for Qualitative Data Analysis 

The qualitative data analysis model outlined here was developed to teach students how to carry out qualitative data analysis by following a series of iterative cycles that synthesize the main tactics found across qualitative approaches suitable for novice researchers. As a result, this model can provide a strong foundation for almost any qualitative research, but it is also reduced to the most essential points, making it relatively easy to grasp. Once students have actually experienced analysis, they can then "take a step back" and reflect on the analysis to meaningfully develop understanding; the next time they embark upon a qualitative research project, they will already have a clearer idea of what to expect and how to go about their analysis. Our goal is to give students more confidence and knowledge so that they can make better-informed methodological and analytic decisions in the future. In keeping with the majority of qualitative researchers, we view the simultaneous involvement in both data collection and data analysis as fundamental for the analyst to develop their understanding of the data and continue collecting meaningful data in order to effectively answer their research questions (Braun & Clarke, 2013; Charmaz, 2006; Creswell, 2007; Maxwell, 2013; Miles, Huberman, & Saldaña, 2014). In other words, analysis begins as soon as data collection starts. Our model focuses on the analysis of this data, guiding novice qualitative researchers through the process of making sense of their data. The approach is inductive-deductive, following the elaboration of a conceptual framework based on a comprehensive literature review that guides subsequent data collection and analysis but still leaves space for unanticipated information to emerge.
We believe that beginning with inductive analyses is important because we want to encourage novices to immerse themselves in their data with an open mind and to consider various possible interpretations and theoretical directions, rather than concentrating on what they found in the literature. The subsequent deductive analyses, then, foster the novices' sense-making of the dataset as they contrast their initial analyses with previous studies. We value the combination of both inductive and deductive strategies, because it provides an approach that is comprehensive yet manageable for new qualitative researchers: conducting a purely deductive study can limit the researcher's ability to identify rich and unanticipated findings, yet, on the other hand, conducting a purely inductive study can be intimidating for novice researchers. Moreover, teaching students to think in polarising dichotomies runs the risk of boxing students into different "camps" (Silverman, 2014), but by showing that both approaches are valuable for analysing qualitative data, the richness of the discipline can be more adequately appreciated. Since the "craft" of qualitative research is best learned through hands-on practice, a combined inductive-deductive approach seems ideal for permitting novice qualitative researchers to experience both classic approaches to qualitative data analysis.

The practice of writing memos forms an integral part of this qualitative data analysis approach, because we have found it to be effective for encouraging students to engage in reflexive and critical thinking. In order to provide guidelines for novice researchers, we suggest three types of memos that can be applied to most qualitative research projects: the research diary, the methodological memo, and the analytical memo. The research diary would be the primary memo for reflections, thinking critically about the work, and tracking the development of the research. This memo can help students clarify their assumptions, personal responses, and decision-making about their study. The reflexive thinking developed throughout the research diary is a part of the "quality control" in qualitative research (Braun & Clarke, 2013). Since the process of writing memos is relatively abstract, students can be hesitant to engage in memo-writing. Given this, we also provide practical suggestions of what to include in the different types of memos to help students get started. For example, the research diary can be used to describe and reflect on what has been done on a day-to-day basis, keep a "to do" list, and outline a strategic plan for the short-term and long-term development of ideas. Students can also keep an account of important facts (such as people the student met, literature they read, or lectures they attended) and notes from discussions or useful conversations. The research diary can also be used to suggest ways to move forward on certain problems, write ideas or questions to follow up on, brainstorm, and develop personal views and analyses throughout the research project. In order to help maintain the empirical value of qualitative research, we encourage students to transparently describe their methodological decisions (Tracy, 2013; Yin, 2011). Students can create methodological memos to elaborate on their particular approach to the study. 
Moreover, the process of writing everything out can help cement their understanding of qualitative methodology. We also find that students often refer to their methodological memos as a reminder of the steps they need to follow as they proceed through their analysis. These memos can ultimately serve as a reference point to guide a consistent and coherent development of the research. While the methodological decisions depend on the type of study being carried out, important things to include in any methodological memo are the development of the conceptual framework, analytical processes, and theoretical approaches. Students can also elaborate on how they collected data, how they analysed data, which coding cycles they used, and how they identified relations between codes. Methodological memos can also be used to record any methodological or analytical dilemmas that arise, as well as to document any deliberate or unexpected changes that occur throughout the project. Analytical memos can be used for elaborating the in-depth analysis of the data and going beyond explicit descriptions. Given that much of qualitative data analysis is developed through the actual process of writing, analytical memos provide a strong starting point from which a "rough draft" of the analysis can be developed. Moreover, analytical memos can be powerful for documenting and grounding the analysis in the data (through direct links to data segments) and providing an audit trail of the evolving analysis. To give more concrete advice to students, we suggest that analytical memos can be used as a space to reflect on and write about the study's research questions and objectives, to record ideas and analyses of what the information is "telling" in the context of the research question, and to consider how the researcher relates to the phenomenon at hand and the participants (or other data collected).
Analytical memos can also be used to elaborate on emergent patterns, categories, themes, concepts, and assertions throughout the analysis, as well as possible network links (such as relations, conceptions, and flows) among the codes. Students may also discuss any problems or limitations that arise during the analysis and any possible future directions for the study. The use of memos outlined here is linked to maintaining transparency, coherency, and communicability through a systematic documentation of the researcher's developing work (Auerbach & Silverstein, 2003). Indeed, when it comes to writing up the final paper, the majority of the content often comes directly from the memos. Our aim is to provide a basic approach to memoing from which each student may adapt their own working style. These memos can serve as powerful reference points for students throughout their project, so they may develop their understanding of their data.

Inspection Cycle 

The Inspection Cycle is the first inductive approach to the data, whereby the student begins familiarising themselves with the dataset through preliminary quantitative content analyses and initial phases of auto-coding. They can thus quantify and reflect on the contents of their data. Qualitative content analysis is a classic procedure for reducing and analysing a wide variety of textual data (Flick, 2009; Krippendorff, 2004; Mayring, 2004), and it is helpful for answering "why" questions, whereas quantitative content analyses are helpful for answering "what" questions (Given, 2008). Since the focus of this cycle is familiarization with the data, we encourage students to search for the "whats" of their data, instead of being overly concerned with analysing latent or interpretative meanings just yet. While content analysis is sometimes criticised for being marked by the ideals of quantitative methodology (Flick, 2009), we feel that this is an effective preliminary analytic procedure for novice qualitative researchers, as students are often more familiar with quantitative methods due to the prevalence of quantitative research methods courses in social science programs (Forrester & Koutsopoulou, 2008; Mitchell et al., 2007). Moreover, by gaining hands-on experience with both quantitative and qualitative analysis techniques, students can compare and contrast the two approaches in order to understand the strengths and limitations of each within qualitative research. Quantitative analyses can be easily carried out using online tools, text processing programs (such as Microsoft Word), and computer-assisted qualitative data analysis software (CAQDAS). Many CAQDAS packages likewise include auto-coding features, which permit students to quickly and easily code their data according to the concepts identified in their quantitative analyses.
For those interested in using software during the whole analysis process, we have found CAQDAS to be perfectly capable of meeting the needs of this model. The Inspection Cycle thus incorporates the practice of basic quantitative content analysis in order to encourage students to identify possibly relevant concepts from their data, and we then contrast this analysis with more interpretative and qualitative analyses of the subsequent analysis cycles.
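As a concrete illustration of the quantitative "what" pass described above, a basic word-frequency count can be computed without any specialist software. The stop-word list and sample transcript below are invented for illustration; CAQDAS packages provide far richer versions of the same operation.

```python
import re
from collections import Counter

# Minimal sketch of a quantitative content analysis: tokenize a transcript
# and count word frequencies, ignoring a small (illustrative) stop-word list.
STOP_WORDS = frozenset({"the", "a", "to", "and", "of"})

def word_frequencies(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

transcript = "The workload felt heavy, and the workload kept growing."
freqs = word_frequencies(transcript)
print(freqs["workload"])  # 2: "workload" recurs, flagging a candidate concept
```

Recurring terms identified this way become candidate concepts for the more interpretative coding of the subsequent cycles.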

Coding Cycle 

The Coding Cycle is where the researcher begins to analyse their data in depth—they now stop and think about each data segment and take their time exploring possible interpretations. Students thus begin to significantly condense their data; although data condensation is an inherent part of the entire research process (including data collection and transcription), the Coding Cycle emphasises the practice of selecting, focusing, simplifying, abstracting, and/or transforming the data that appear in the full corpus of information (Miles, Huberman, & Saldaña, 2014). Among the great variety of qualitative data coding methods, we identified those that are relatively easy for novice qualitative researchers to adopt and that can likewise be applied to a range of qualitative methodologies. As Saldaña (2013) points out, analysts need to decide which coding methods will be necessary for answering their research questions, and the different methods can often be compatibly mixed and matched. However, discerning which coding methods to use among the plethora of possibilities can be overwhelming for beginning qualitative researchers, so we developed our model to guide students through common coding practices. We also teach students about theoretical saturation—reaching the point at which no new knowledge is generated—as a general indicator of when the data has been sufficiently coded (Braun & Clarke, 2013; Denzin & Lincoln, 2005; Flick, 2009). By breaking the Coding Cycle down into a series of methods, we also show students that qualitative research is a cyclical and iterative process—a common misconception among students is that all coding can be conducted with one reading of the data. Coding is not a one-off operation, but rather involves multiple readings and reconsiderations of the data and the actual codes being developed.
The first step is pre-coding, which involves circling, highlighting, bolding, underlining, or colouring rich or significant segments of the data that capture the students' interest (Saldaña, 2013). In other words, the student identifies the "codable moments" worthy of attention (Boyatzis, 1998). The aim is for students to explore the data and gain a global understanding by marking the passages of interest and reflecting in a memo on why each passage captured their attention. Students therefore do not begin by directly coding their data, so they can also learn that qualitative data analysis consists of much more than simply assigning codes to data segments (Coffey & Atkinson, 1996). As this is the first full read-through of the data, we want students to remain open-minded to different possible interpretations and focus only on the content of the data. We advise that students write a memo for each data segment they mark in order to get them used to engaging in reflexive thinking; moreover, by taking the time to write about each passage, students can slow down and develop their understanding of the data. The second step is Initial coding. This coding cycle involves coding the data according to any emergent information identified in the data segments. Initial coding often ranges across a variety of topics, and it can encourage the analyst to remain open to all possible theoretical directions (Charmaz, 2006; Corbin & Strauss, 2015; Glaser, 2005; Glaser & Strauss, 1967; Miles, Huberman, & Saldaña, 2014; Saldaña, 2013). We suggest beginning with open-ended analyses such as Initial coding to encourage students' full immersion in the dataset; this is where students may begin to reflect deeply on the contents and nuances of the data (Saldaña, 2013). Instead of focusing on how the data compares to the literature, we want students to pay attention to what is going on in their data and start coding this inductive information.
Students may also create In Vivo codes to capture concepts or phrases from the participants. Initial coding is prevalent in a vast array of qualitative analysis methods, because it constitutes the first major process of coding, which identifies specific, relevant segments of data and can provide analytic leads that the researcher may further explore (Saldaña, 2013). Initial coding was originally referred to as "Open coding" in grounded theory publications, but Charmaz (2006) coined the term "Initial coding" to convey that this is a starting step from which the rest of the analysis will continue; this open-ended coding process is also described in more general terms in Braun and Clarke's (2013) guide for novice qualitative researchers. Moreover, Initial coding has been recognized as particularly well-suited for beginning qualitative researchers who are learning to code data (Miles, Huberman, & Saldaña, 2014; Saldaña, 2013). The third step is Elaborative coding, whereby students begin to deductively analyse their data. This coding cycle is based on a "start list" of codes elaborated prior to collecting and analysing the data; this start list stems from each student's literature review and elaborated theoretical framework. The coding is carried out in a "top-down" fashion, whereby the relevant segments of data are analysed according to the previously identified concepts, and students can thus build on or corroborate existing research (Auerbach & Silverstein, 2003; Miles, Huberman, & Saldaña, 2014; Saldaña, 2013). As this is also the students' first experience with deductive approaches to analysis, this step helps teach students how their "start list" of codes can later be modified, deleted, or expanded as the analysis progresses.
This process also shows students the importance of coherence in qualitative research—harmonising the analysis of previous literature with the data analysis in order to answer the research questions (Auerbach & Silverstein, 2003; Saldaña, 2013). Whereas the first two coding cycles focused on analysing only the information present in the data, this coding cycle re-examines the data only for information related to the concepts and dimensions that were identified from the literature review. This step is valuable for linking the students' conceptual framework to their analysis, thus illustrating one of the ways in which conceptual frameworks can help researchers make sense of the developing "story" of their data (Maxwell, 2013; Miles, Huberman, & Saldaña, 2014).
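The three coding steps above can be sketched in code. The following is a minimal Python sketch of one coded data segment; all names, data, and code labels here are hypothetical illustrations, not part of the article's method or any established QDA software:

```python
from dataclasses import dataclass, field

# Hypothetical representation of one coded data segment. The fields mirror
# the three steps: a pre-coding memo, then inductive and deductive codes.
@dataclass
class Segment:
    text: str                                # the marked "codable moment"
    memo: str = ""                           # reflexive pre-coding note
    codes: set = field(default_factory=set)  # accumulated code labels

# Deductive "start list" drawn from a (hypothetical) literature review
start_list = {"coping mechanisms", "technology acceptance"}

seg = Segment(text="I just muted my phone and kept a strict schedule.")

# Pre-coding: record why this passage captured attention
seg.memo = "Striking self-regulation strategy; why did this stand out to me?"

# Initial (inductive) coding: label whatever emerges from the segment itself
seg.codes.add("adaptation strategies")
# In Vivo coding: keep the participant's own words as the code
seg.codes.add('"muted my phone"')
# Elaborative (deductive) coding: apply a start-list concept where it fits
seg.codes.add("coping mechanisms")

print(sorted(seg.codes))
```

The point of the sketch is only that inductive and deductive codes accumulate on the same segment, which is what makes the later Categorisation Cycle possible.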

Categorisation Cycle 

The Categorisation Cycle consists of developing a categorical or thematic organisation of the code list—revising the codes created thus far and identifying the overarching categories or themes. This is similar to Saldaña's (2013) Second Cycle coding process, whereby the First Cycle codes are analysed and grouped into meaningful categories, themes, or constructs. Students may thus structure their code list to reflect their developing analysis: at this stage it is common to rename, merge, split, or delete codes. Rather than introducing too many complex coding methods, we have students focus on grouping together their codes and beginning to elaborate the possible categories of their data analysis; this can be done by comparing, grouping, and mapping codes in displays (Braun & Clarke, 2013; Gibbs, 2007; Miles, Huberman, & Saldaña, 2014; Saldaña, 2013). These are essentially adaptations of Saldaña's (2013) Pattern and Axial coding, but we find it easier for students to grasp the process of categorisation by working with their networks and reshaping them to more adequately tell the story of the dataset.

The first step is Focused coding, which effectively bridges the inductive and deductive codes created thus far. Once the data has been coded for initial impressions and previously identified concepts, this coding cycle involves searching for thematic or conceptual similarity among the data by focusing on the codes themselves (Charmaz, 2006; Saldaña, 2013). Students now group their different inductive and deductive codes into possible categories by looking at their conceptual framework, code frequencies, and the elements that are most meaningful for answering their research questions (Braun & Clarke, 2013). For example, it is common for some of the code names to become category names. At this point, we have students read through the dataset again, but this time they examine how the data "fits" each of their developing categories. 
In other words, students recode their dataset by focusing only on the codes that form part of their first category; they then repeat the process with the codes from their second category, and so on. In addition, students will have generated a relatively large list of codes by now, so it is important that they read through the dataset again to ensure the consistency of their coding; for example, a concept identified in the data of the last interview may actually also appear in the data of the first interview. Thus, once the code list has been revised, the dataset needs to be revisited to ensure these codes are consistently applied. Focused coding also encourages students to begin exploring possible themes from their data in a way that does not solely focus on the most frequently occurring codes; rather, students may also focus on the different dimensions of their conceptual framework in order to explore to what extent they "fit" the data and thus begin identifying possible adjustments that need to be made to the conceptual framework. In order to help students begin drawing the overarching connections across their dataset, the Categorisation Cycle emphasises the importance of revisiting the conceptual framework and contrasting it with the analysis carried out thus far. We advocate the practice of displaying data as an inherent part of analysis and sense-making, in line with Miles, Huberman, and Saldaña's (2014) approach. While students have been involved in mapping out networks from the beginning of their project, the Categorisation Cycle foregrounds the data display process. Students may make adjustments to their framework at this point to include new codes, exclude irrelevant codes, and modify the relations among them. 
Revising the code list and mapping out the conceptual framework allows students to develop their "meta-thinking" skills and identify the overarching themes, patterns, or categories in their dataset (Miles, Huberman, & Saldaña, 2014). Moreover, the dimensions of the categories are conceptually and operationally defined, and the relations between these dimensions are specified. Many qualitative researchers value the process of mapping themes or creating networks of categories or concepts in order to make sense of the overarching patterns of a dataset (Braun & Clarke, 2013; Corbin & Strauss, 2015; Flick, 2009; Maxwell, 2013; Miles, Huberman, & Saldaña, 2014). This phase involves solidifying the conceptual network and making it explicitly understandable; by defining each aspect, students begin focusing on the elaboration of distinct categories or themes. The conceptual framework is crucial for helping students keep their research focused (and thus avoid an overload of possibly irrelevant concepts), but it remains malleable and evolves throughout the analysis. Students thus begin to crystallise their framework by clearly distinguishing the different dimensions of the categories as well as how these categories are related to one another. Students also work closely with their conceptual framework to integrate the emergent findings from the data analysis with the information gathered from the literature review, thus bringing together the inductive and deductive analyses.
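The mechanics of Focused coding—tallying code frequencies and checking which cases each tentative category covers—can be sketched briefly. The interviews, code labels, and category names below are invented for illustration; the grouping of codes into categories is an analytic judgment, not something computed:

```python
from collections import Counter

# Hypothetical coded dataset: each interview maps to its list of code labels
coded_interviews = {
    "interview_01": ["adaptation strategies", "social isolation", "coping mechanisms"],
    "interview_02": ["technical issues", "coping mechanisms", "emotional well-being"],
    "interview_03": ["social isolation", "emotional well-being", "coping mechanisms"],
}

# Code frequencies flag candidates for category names, though frequency
# alone should not drive the grouping
freq = Counter(code for codes in coded_interviews.values() for code in codes)

# Focused coding: tentative categories grouping inductive and deductive codes
categories = {
    "social support": ["social isolation", "emotional well-being"],
    "coping": ["coping mechanisms", "adaptation strategies"],
    "learning conditions": ["technical issues"],
}

# Consistency check: the "recode pass" — which interviews touch each category?
for category, codes in categories.items():
    hits = [name for name, cs in coded_interviews.items() if set(cs) & set(codes)]
    print(f"{category}: {len(hits)} of {len(coded_interviews)} interviews")
```

A category that touches only one interview may signal either a code applied inconsistently (prompting the re-read described above) or a dimension of the conceptual framework that does not "fit" the data.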

Modelling Cycle 

The Modelling Cycle involves the final elaboration of the conceptual framework, which has now been corroborated with the empirical analysis. This final framework thus provides a comprehensive picture of the research, and students can examine it to verify that it represents the data accurately and tells a compelling story about the analysis and findings. At this point, students may read through their entire dataset again, now with their tentative conclusions in mind, and verify that these conclusions tell a valid and compelling story about their data (Miles, Huberman, & Saldaña, 2014). Moreover, it is also important to consider the resulting analysis in light of the literature, which often requires looking through previous sources again to examine how they may support or refute the findings. This shows students, once again, that qualitative research is not a linear process but rather an iterative approach to making sense of rich information. Finally, when it comes to writing up and presenting their qualitative research, students use their final conceptual frameworks to guide the flow and presentation of the material.

Using the above article, compare and contrast some of the main forms of qualitative data analysis.

Provide in-text citation to back up your answer.

Answer & Explanation

The qualitative data analysis model outlined in the article incorporates various forms of qualitative analysis, each serving distinct purposes within the research process. Content analysis, highlighted in the Inspection Cycle, provides a preliminary approach for researchers to familiarize themselves with the dataset through quantitative content analysis, focusing on identifying and quantifying the contents of the data without delving deeply into interpretative meanings. This method, suitable for answering "what" questions about the data, allows researchers to gain an initial understanding of the dataset's content (Flick, 2009; Mayring, 2004; Given, 2008). In contrast, the Coding Cycle introduces both inductive and deductive coding approaches. Inductive coding, particularly initial coding, encourages open-ended exploration of the data without predetermined categories or themes, allowing for the emergence of new insights and patterns (Charmaz, 2006; Saldaña, 2013). Conversely, elaborative coding involves deductive analysis based on a predetermined conceptual framework derived from the literature review, focusing on coding data according to pre-established concepts and categories (Auerbach & Silverstein, 2003; Saldaña, 2013). The Categorization Cycle emphasizes thematic analysis, where researchers organize codes into meaningful categories or themes to identify patterns and connections within the data, facilitating the development of a coherent narrative (Braun & Clarke, 2013; Saldaña, 2013). While not explicitly mentioned, the iterative nature of the qualitative data analysis model reflects elements of grounded theory, which involves generating theories directly from the data through constant comparison and theoretical sampling (Charmaz, 2006; Glaser & Strauss, 1967). 
Overall, these various forms of qualitative analysis provide researchers with a comprehensive toolkit for exploring and making sense of qualitative data, from initial exploration to theory-building and interpretation.

Approach to solving the question:

The qualitative data analysis model presented in the article incorporates different approaches, including content analysis, inductive and deductive coding, thematic categorization, and iterative theory-building. Content analysis focuses on quantifying data content to gain initial insights. Inductive coding allows for open-ended exploration of data, while deductive coding applies predetermined concepts. 

Detailed explanation:

Thematic categorization organizes codes into meaningful themes, facilitating narrative development. These methods enable researchers to systematically analyze qualitative data, from initial exploration to theory development, providing a comprehensive framework for qualitative analysis (Flick, 2009; Charmaz, 2006; Saldaña, 2013; Braun & Clarke, 2013).

For example, in the qualitative study examining the transition to online learning during the COVID-19 pandemic, researchers utilize various forms of qualitative data analysis to comprehensively understand the experiences of students. They start with content analysis, quantifying challenges mentioned by students such as technical issues or social isolation. Moving to inductive coding, researchers immerse themselves in the data, identifying emergent themes like adaptation strategies or emotional well-being. Deductive coding follows, as researchers categorize data based on predetermined concepts from existing literature, such as coping mechanisms or technology acceptance. Thematic categorization then organizes these codes into broader themes like educational outcomes or social support. Throughout this process, researchers engage in iterative theory-building, refining their conceptual framework based on data analysis to develop a comprehensive understanding of the phenomenon. This integrated approach to qualitative data analysis allows for a nuanced exploration of student experiences, capturing both the breadth and depth of their transition to online learning.

Key references: https://getthematic.com/insights/coding-qualitative-data/
