
What Is a Case Study?

When you’re performing research as part of your job or for a school assignment, you’ll probably come across case studies that help you learn more about the topic at hand. But what is a case study, and why is it helpful? Read on to learn all about case studies.

Deep Dive into a Topic

At face value, a case study is a deep dive into a topic. Case studies can be found in many fields, particularly across the social sciences and medicine. When you conduct a case study, you create a body of research based on an inquiry and related data from analysis of a group, individual or controlled research environment.

As a researcher, you can benefit from the analysis of case studies similar to inquiries you’re currently studying. Researchers often rely on case studies to answer questions that basic information and standard diagnostics cannot address.

Study a Pattern

One of the main objectives of a case study is to find a pattern that answers whatever the initial inquiry seeks to find. This might be a question about why college students are prone to certain eating habits or what mental health problems afflict house fire survivors. The researcher then collects data, either through observation or data research, and starts connecting the dots to find underlying behaviors or impacts of the sample group’s behavior.

Gather Evidence

During the study period, the researcher gathers evidence to back the observed patterns and future claims that’ll be derived from the data. Since case studies are usually presented in the professional environment, it’s not enough to simply have a theory and observational notes to back up a claim. Instead, the researcher must provide evidence to support the body of study and the resulting conclusions.

Present Findings

As the study progresses, the researcher develops a solid case to present to peers or a governing body. Case study presentation is important because it legitimizes the body of research and opens the findings to a broader analysis that may end up drawing a conclusion that’s more true to the data than what one or two researchers might establish. The presentation might be formal or casual, depending on the case study itself.

Draw Conclusions

Once the body of research is established, it’s time to draw conclusions from the case study. As with all social sciences studies, conclusions from one researcher shouldn’t necessarily be taken as gospel, but they’re helpful for advancing the body of knowledge in a given field. For that purpose, they’re an invaluable way of gathering new material and presenting ideas that others in the field can learn from and expand upon.


C. Confidence Intervals for the Odds Ratio

In case-control studies it is not possible to estimate a relative risk, because the denominators of the exposure groups are not known with a case-control sampling strategy. Nevertheless, one can compute an odds ratio, which is a similar relative measure of effect. (For a more detailed explanation of the case-control design, see the module on case-control studies in Introduction to Epidemiology.)

Consider the following hypothetical study of the association between pesticide exposure and breast cancer in a population of 6,647 people. If data were available on all subjects in the population, the distribution of disease and exposure might look like this:

                Diseased    Non-diseased    Total
Exposed             7           1,000       1,007
Non-exposed         6           5,634       5,640
Total              13           6,634       6,647

If we had such data on all subjects, we would know the total number of exposed and non-exposed subjects, and within each exposure group we would know the number of diseased and non-disease people, so we could calculate the risk ratio. In this case RR = (7/1,007) / (6/5,640) = 6.52, suggesting that those who had the risk factor (exposure) had 6.5 times the risk of getting the disease compared to those without the risk factor.
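To make the arithmetic concrete, here is a minimal Python sketch of the risk ratio computation using the counts quoted above (exact division gives 6.53; the module rounds to 6.52):

```python
# Risk ratio (RR) from the full hypothetical cohort:
# 7 diseased among 1,007 exposed; 6 diseased among 5,640 non-exposed.
exposed_cases, exposed_total = 7, 1007
unexposed_cases, unexposed_total = 6, 5640

risk_exposed = exposed_cases / exposed_total        # cumulative incidence, exposed
risk_unexposed = unexposed_cases / unexposed_total  # cumulative incidence, non-exposed
rr = risk_exposed / risk_unexposed
print(round(rr, 2))  # → 6.53
```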

However, suppose the investigators planned to determine exposure status by having blood samples analyzed for DDT concentrations, but they only had enough funding for a small pilot study with about 80 subjects in total. The problem, of course, is that the outcome is rare, and if they took a random sample of 80 subjects, there might not be any diseased people in the sample. To get around this problem, case-control studies use an alternative sampling strategy: the investigators find an adequate sample of cases from the source population, and determine the distribution of exposure among these "cases". The investigators then take a sample of non-diseased people in order to estimate the exposure distribution in the total population. As a result, in the hypothetical scenario for DDT and breast cancer the investigators might try to enroll all of the available cases and 67 non-diseased subjects, i.e., 80 in total since that is all they can afford. After the blood samples were analyzed, the results might look like this:

With this sampling approach we can no longer compute the probability of disease in each exposure group, because we just took a sample of the non-diseased subjects, so we no longer have the denominators in the last column. In other words, we don't know the exposure distribution for the entire source population. However, the small control sample of non-diseased subjects gives us a way to estimate the exposure distribution in the source population. So, we can't compute the probability of disease in each exposure group, but we can compute the odds of disease in the exposed subjects and the odds of disease in the unexposed subjects.
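Since the sampled 2x2 table itself is not reproduced in the text, the sketch below uses hypothetical counts (all 13 cases plus an assumed exposure split among the 67 sampled controls) purely to illustrate how the odds of disease are computed in each exposure group:

```python
# Hypothetical nested case-control counts -- the exposure split among the
# 67 sampled controls (10 exposed / 57 unexposed) is assumed for illustration.
cases_exposed, cases_unexposed = 7, 6
controls_exposed, controls_unexposed = 10, 57

odds_disease_exposed = cases_exposed / controls_exposed        # odds of disease, exposed
odds_disease_unexposed = cases_unexposed / controls_unexposed  # odds of disease, unexposed
odds_ratio = odds_disease_exposed / odds_disease_unexposed
print(round(odds_ratio, 2))  # → 6.65
```

With a well-chosen control sample, this odds ratio approximates the risk ratio of 6.5 that the full-cohort data would give.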


Medical Epidemiology, 4e


An approximate 95% confidence interval (CI) around the point estimate of the odds ratio (OR) for an unmatched case–control study can be calculated using the following formula:

95% CI = exp[ln(OR) ± 1.96 sqrt(1/A + 1/B + 1/C + 1/D)]

where exp is the base of the natural logarithm raised to the quantity in the brackets, and A, B, C, and D represent the numerical entries in the summary format in Table 9–4. This confidence interval is approximate because it is based on a computational short-cut to estimating the variance of the natural logarithm of the OR. For relatively large sample sizes, this approximation yields confidence bounds that are quite close to the exact values, which are much more difficult to calculate.

For the data in Table 9–5 relating l -tryptophan brand use to eosinophilia-myalgia syndrome in an unmatched case–control study, the 95% CI was calculated as follows:

Similarly, an approximate 95% CI around the point estimate of an OR from a pair-matched case–control study can be calculated using the following formula:

95% CI = exp[ln(OR) ± 1.96 sqrt(1/X + 1/Y)]

where exp is the base of the natural logarithm raised to the quantity in the brackets, and X and Y represent the numerical entries in the summary format in Table 9–6.
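As a sketch of the pair-matched calculation (the Table 9–6/9–7 entries are not reproduced in the text, so X and Y below are hypothetical discordant-pair counts):

```python
import math

# Pair-matched case-control study: OR = X/Y, where X and Y are the two
# discordant-pair counts. The values below are hypothetical.
X, Y = 20, 5
odds_ratio = X / Y
se_ln_or = math.sqrt(1 / X + 1 / Y)  # approximate SE of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_ln_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_ln_or)
print(round(odds_ratio, 1), round(lower, 2), round(upper, 2))  # → 4.0 1.5 10.66
```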

For the data in Table 9–7 relating l -tryptophan use to eosinophilia-myalgia syndrome in a hypothetical pair-matched case–control study, the 95% CI was calculated as follows:



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan-.


Steven Tenny ; Mary R. Hoffman .


Last Update: May 22, 2023 .

The odds ratio (OR) is a measure of how strongly an event is associated with an exposure. The odds ratio is a ratio of two sets of odds: the odds of the event occurring in an exposed group versus the odds of the event occurring in a non-exposed group. Odds ratios are commonly used to report case-control studies. The odds ratio helps identify how likely an exposure is to lead to a specific event. The larger the odds ratio, the higher the odds that the event will occur with exposure. Odds ratios smaller than one imply the event has lower odds of happening with the exposure. [1] [2] [3]

Odds Ratio = (odds of the event in the exposed group) / (odds of the event in the non-exposed group)

If the data is set up in a 2 x 2 table as shown in the figure then the odds ratio is (a/b) / (c/d) = ad/bc. The following is an example to demonstrate calculating the odds ratio (OR).

If we have a hypothetical group of smokers (exposed) and non-smokers (not exposed), then we can look for the rate of lung cancer (event).  If 17 smokers have lung cancer, 83 smokers do not have lung cancer, one non-smoker has lung cancer, and 99 non-smokers do not have lung cancer, the odds ratio is calculated as follows.  

First, we calculate the odds in the exposed group: 17/83 ≈ 0.205.

Next, we calculate the odds for the non-exposed group: 1/99 ≈ 0.01.

Finally, we can calculate the odds ratio: 0.205/0.01 = 20.5.

Thus, using the odds ratio, this hypothetical group of smokers has roughly 20 times the odds of having lung cancer compared with non-smokers. The question then arises: is this finding significant?

Odds Ratio Confidence Interval

To answer whether this finding is significant, the confidence interval is calculated. The confidence interval gives an expected range for the true odds ratio of the population to fall within. If estimating the odds of lung cancer in smokers versus non-smokers of the general population based on a smaller sample, the true population odds ratio may differ from the odds ratio found in the sample. In order to calculate the confidence interval, the alpha, or level of significance, is specified. An alpha of 0.05 yields a 95% (1 − alpha) confidence interval, meaning there is 95% confidence that the true odds ratio of the overall population lies within the range. A 95% confidence interval is traditionally chosen in the medical literature (but other confidence levels can be used). The following formula is used for a 95% confidence interval (CI):

95% CI = e^[ln(OR) ± 1.96 sqrt(1/a + 1/b + 1/c + 1/d)]

Here 'e' is the base of the natural logarithm, 'ln' is the natural log, 'OR' is the calculated odds ratio, 'sqrt' is the square root function, and a, b, c and d are the values from the 2 x 2 table. Calculating the 95% confidence interval for our previous hypothetical population we get:

Upper 95% CI = e^[ln(OR) + 1.96 sqrt(1/a + 1/b + 1/c + 1/d)]
             = e^[ln(20.5) + 1.96 sqrt(1/17 + 1/83 + 1/1 + 1/99)]
             = e^[3.02 + 1.96 sqrt(1.08)]
             = e^[3.02 + 1.96 (1.04)]
             = e^[3.02 + 2.04]
             = e^[5.06]
             = 158

Lower 95% CI = e^[ln(OR) - 1.96 sqrt(1/a + 1/b + 1/c + 1/d)]
             = e^[ln(20.5) - 1.96 sqrt(1/17 + 1/83 + 1/1 + 1/99)]
             = e^[3.02 - 1.96 sqrt(1.08)]
             = e^[3.02 - 1.96 (1.04)]
             = e^[3.02 - 2.04]
             = e^[0.98]
             = 2.7

Thus the odds ratio in this example is 20.5 with a 95% confidence interval of [2.7, 158].  (Note: If no rounding is performed when doing the above calculations, the odds ratio is 20.28 with 95% CI of [2.64, 155.6] which is fairly close to the rounded calculations.)
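The unrounded numbers in the note can be reproduced with a few lines of Python:

```python
import math

# 2x2 table from the example: rows = smokers / non-smokers,
# columns = lung cancer / no lung cancer.
a, b, c, d = 17, 83, 1, 99

odds_ratio = (a * d) / (b * c)                      # ad/bc
se_ln_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_ln_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_ln_or)
print(round(odds_ratio, 2), round(lower, 2), round(upper, 1))  # → 20.28 2.64 155.6
```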

Confidence Interval Interpretation

If the confidence interval for the odds ratio includes the number 1, then the calculated odds ratio would not be considered statistically significant. This can be seen from the interpretation of the odds ratio. An odds ratio greater than 1 implies there are greater odds of the event happening in the exposed versus the non-exposed group. An odds ratio of less than 1 implies the odds of the event happening in the exposed group are less than in the non-exposed group. An odds ratio of exactly 1 means the odds of the event happening are exactly the same in the exposed versus the non-exposed group. Thus, if the confidence interval includes 1 (e.g., [0.01, 2], [0.99, 1.01], or [0.99, 100] all include 1), then the true population odds ratio may be above or below 1, so it is uncertain whether the exposure increases or decreases the odds of the event happening at our specified level of confidence. [1]

The odds ratio can be confused with relative risk.  As stated above, the odds ratio is a ratio of 2 odds. As odds of an event are always positive, the odds ratio is always positive and ranges from zero to very large. The relative risk is a ratio of probabilities of the event occurring in all exposed individuals versus the event occurring in all non-exposed individuals. In a 2-by-2 table with cells a, b, c, and d (see figure), the odds ratio is odds of the event in the exposure group (a/b) divided by the odds of the event in the control or non-exposure group (c/d). Thus the odds ratio is (a/b) / (c/d) which simplifies to ad/bc.  This is compared to the relative risk which is (a / (a+b)) / (c / (c+d)).  If the disease condition (event) is rare, then the odds ratio and relative risk may be comparable, but the odds ratio will overestimate the risk if the disease is more common.  In such cases, the odds ratio should be avoided, and the relative risk will be a more accurate estimation of risk.  [3]   Commonly, odds ratios will be reported in case-control studies, in which relative risks cannot be calculated.

The relative risk for the above hypothetical example of smokers versus non-smokers developing lung cancer is calculated as RR = (17/(17 + 83)) / (1/(1 + 99)) = 0.17/0.01 = 17.

Thus in our example, the odds ratio is 20.5 (smokers have roughly 20 times the odds of having lung cancer compared with non-smokers), whereas the relative risk is 17 (smokers have 17 times the risk of lung cancer compared with non-smokers).
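The same comparison in code, computing the relative risk with integer arithmetic to avoid rounding:

```python
# Relative risk for the same smoking example: (a/(a+b)) / (c/(c+d)).
a, b, c, d = 17, 83, 1, 99
relative_risk = (a * (c + d)) / (c * (a + b))  # = (17/100) / (1/100)
print(relative_risk)  # → 17.0
```

Because the outcome here is not rare (17% of smokers), the odds ratio (20.3) overestimates the relative risk (17), as the text describes.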

The odds ratio is the ratio of the odds of the event happening in an exposed group versus a non-exposed group. The odds ratio is commonly used to report the strength of association between exposure and an event. The larger the odds ratio, the more likely the event is to be found with exposure. The further the odds ratio falls below 1, the less likely the event is to be found with exposure. It is important to look at the confidence interval for the odds ratio: if the confidence interval includes 1, the odds ratio did not reach statistical significance. [4]

2x2 table with calculations for the odds ratio and 95% confidence interval for the odds ratio. Contributed by Steven Tenny MD, MPH, MBA




Software for attributable risk and confidence interval estimation in case-control studies


The increasing interest in obtaining model-based estimates of attributable risk (AR) and corresponding confidence intervals, in particular when more than one risk factor and/or several confounding factors are jointly considered, led us to develop a program based on the procedure described by Benichou and Gail for case-control data. This program is structured as an SAS-macro. It is suited to analysis of the relationship between risk factors and disease in case-control studies with simple random sampling of controls, in terms of relative risks and ARs, by means of unconditional logistic regression analysis. The variance of the AR is obtained by the delta method and is based on three components, namely, (i) the variance-covariance matrix of the vector of the estimated probabilities of belonging to joint levels of the exposure and confounding factors conditional on being a case, (ii) the variance-covariance matrix of the odds ratio parameter estimates from the logistic model, and (iii) the covariances between these probability and parameter estimates. Only a limited number of commands is requested from the user (i.e., the name of the work file and the names of the variables considered). The estimated relative risks for all the factors included in the model, the attributable risk for the exposure factor under consideration, and the corresponding 95% confidence intervals are given as outputs by the macro. Computational problems, if any, may arise for large numbers of covariates because of the resulting large size of vectors and matrices. The macro was tested for reliability and consistency on published data sets of case-control studies.


Comparison of approaches to estimate confidence intervals of post-test probabilities of diagnostic test results in a nested case-control study

BMC Medical Research Methodology volume  12 , Article number:  166 ( 2012 ) Cite this article


Nested case–control studies are becoming increasingly popular as they can be very efficient for quantifying the diagnostic accuracy of costly or invasive tests or (bio)markers. However, they do not allow for direct estimation of the test’s predictive values or post-test probabilities, let alone their confidence intervals (CIs). Correct estimates of the predictive values themselves can easily be obtained using a simple correction by the (inverse) sampling fractions of the cases and controls. But using this correction to estimate the corresponding standard error (SE) falsely inflates the number of patients that are actually studied, yielding too-small CIs. We compared different approaches for estimating the SE, and thus the CI, of predictive values or post-test probabilities of diagnostic test results in a nested case–control study.

We created datasets based on a large, previously published diagnostic study of 2 different tests (D-dimer test and calf difference test) with a nested case–control design. We compared six different approaches: 1. the standard formula for the SE of a proportion; 2. adaptation of the standard formula with the sampling fraction; 3. a bootstrap procedure; 4. a method that uses the sensitivity, the specificity and the prevalence; 5. weighted logistic regression; and 6. approach 4 on the log odds scale. The approaches were compared with respect to coverage of the CI and CI width.

The bootstrap procedure (approach 3) showed good coverage and relatively small CI widths. Approaches 4 and 6 showed some undercoverage, particularly for the D-dimer test with frequent positive results (positive results around 70%). Approaches 1, 2 and 5 showed clear overcoverage at low prevalences of 0.05 and 0.1 in the cohorts for all case–control ratios.

The results from our study suggest that a bootstrap procedure is necessary to assess the confidence interval for the predictive values or post-test probabilities of diagnostic tests results in studies using a nested case–control design.


An essential step in the evaluation process of a (new) diagnostic test is to assess the diagnostic accuracy measures [ 1 – 4 ]. Traditionally the sensitivity and specificity are studied but another important measure is the predictive value, i.e. the absolute probability that the disease is present or absent given the test result, so-called post-test probability [ 5 ]. Typically, diagnostic accuracy studies use a cross-sectional design in a series or cohort of patients that is defined by the suspicion of the target disease under study. This suspicion is usually defined by the presented symptoms or signs. All patients then undergo the index (e.g. new) tests and subsequently the prevailing reference test or standard [ 5 , 6 ]. Subsequently the predictive values or post-test probabilities of the test results, as well as the sensitivity and specificity can be estimated.

An efficient alternative for this full cohort design is the nested case–control design, in which the controls and cases are sampled from a pre-defined cohort [ 5 – 8 ]. This design is particularly advantageous for diagnostic research purposes when the prevalence of the disease is rare, when the index test is costly or difficult to perform, and when using stored (e.g. biological) material from existing cohorts or biobanks [ 5 – 7 , 9 ]. Limitations, strengths and rationale of the nested case–control design are extensively discussed in the literature, mostly for etiologic research [ 8 , 10 , 11 ], but also recently for the evaluation of diagnostic tests [ 5 , 6 , 9 ].

As an important aim in diagnostic research is to estimate the absolute probability of having the disease given test results (predictive values or post-test probability), the nested character of the design in a cohort with known size is essential. In non-nested or regular case–control studies, controls are sampled from a source population with unknown size. The prevalence of the disease and hence the predictive values can thus not simply be estimated [ 5 , 6 ]. Only relative probabilities, like the odds ratio, can directly be estimated. However, absolute disease probabilities can be estimated, if cases and controls are sampled from an existing, pre-defined cohort, by weighing with the inverse sampling fraction [ 5 ].

For example, consider a full-cohort approach in which the index test result and reference test results are assessed for all patients. Say the index test is an expensive dichotomous biomarker (genomic) measurement requiring human material that is frozen for all cohort members in a biobank. The positive predictive value (PPV) of the marker result is a/(a + b), and the negative predictive value (NPV) is d/(c + d) (Figure 1, Table A; see legend of Figure 1 for explanation of variable names).

Figure 1. Theoretical example of case–control sample, nested in a cohort with known size and prevalence of the disease. (A = true positive, B = false positive, C = false negative, D = true negative, N = number of patients.)

In a nested case–control design, one samples from the full cohort (commonly) the human material of all subjects with a positive reference test (cases), but only a fraction (see cell b1 and d1, Figure 1 , Table C) of those with a negative reference test (controls). The expensive index test is thus only retrieved or measured in the human material of the sampled cases and controls.

In contrast to the typical case–control design, in this nested design the absolute disease probabilities can be calculated by weighing the denominator with the inverse sampling fraction:

PPV = a / (a + (1/sampling fraction) · b1), and
NPV = (d1 · (1/sampling fraction)) / (c + d1 · (1/sampling fraction)),

with sampling fraction = (b1 + d1)/(b + d) (Figure 1, Table C). For example, the PPV and NPV from the full study are 30/(30 + 100) = 0.23 and 300/(10 + 300) = 0.97 (Figure 1, Table B). Applying the approaches to the nested case–control sample with only 10% of all non-diseased patients yields the same results: sampling fraction = (10 + 30)/(100 + 300) = 0.1, PPV = 30/(30 + (10 · 10)) = 0.23, NPV = (30 · 10)/(10 + (30 · 10)) = 0.97 (Figure 1, Table D).
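The worked numbers above can be checked with a short sketch:

```python
# PPV and NPV from the nested case-control sample (Figure 1, Tables C/D),
# up-weighting sampled controls by the inverse sampling fraction.
a, c = 30, 10          # cases: true positives, false negatives (all measured)
b1, d1 = 10, 30        # sampled controls: false positives, true negatives
non_diseased_in_cohort = 400  # b + d in the full cohort

sampling_fraction = (b1 + d1) / non_diseased_in_cohort  # 0.1
w = 1 / sampling_fraction                                # inverse sampling fraction

ppv = a / (a + w * b1)
npv = (w * d1) / (c + w * d1)
print(round(ppv, 2), round(npv, 2))  # → 0.23 0.97
```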

However, the estimation of the standard error (SE) of the predictive values derived from a nested case–control diagnostic accuracy study is not at all straightforward. When simply using the standard formula for the SE of a proportion, sqrt(π(1 − π)/n), where π is the proportion (here the predictive value or absolute disease probability) and n the number of patients, the question is which value of n to use. The actually observed (measured) number of cases and controls does not correspond to the estimated proportion (too low). But simply using the upwardly corrected number of controls and (if also sampled) cases falsely increases the number as if they were all observed, yielding too-small SEs. Clearly, modifications have to be made to the standard formulas to estimate the correct SE of the predictive values of a diagnostic index test from a nested case–control study.

Recently, Mercaldo and colleagues published a method to estimate the SE of predictive values for a case–control design [12]. We compared the approach proposed by Mercaldo with five other approaches using simulated datasets based on an empirical published diagnostic study among patients suspected of deep venous thrombosis. We studied several clinically relevant combinations of disease prevalence and case–control ratios.

Patient data

We used data from a published cross-sectional diagnostic study that collected a cohort of 2086 adult patients suspected of deep vein thrombosis (DVT) in primary care [13, 14]. In brief, the general practitioners systematically documented information on patient history and physical examination. Physical examination included swelling of the affected limb and difference in circumference of the calves, calculated as the circumference (in centimeters) of the affected limb minus the circumference of the unaffected limb, further referred to as the calf difference test. The calf difference was considered abnormal if the difference in circumference between the legs was more than 3 cm. Subsequently, all patients underwent D-dimer testing.

Depending on the hospital to which the patient was referred in the original study, the ELISA method (VIDAS, Biomerieux, France) or the latex assay method (Tinaquant, Roche, Germany) was used to determine the D-dimer level. The test was considered abnormal if the latex assay yielded a D-dimer level ≥400 ng/mL (Tinaquant, Roche, Germany) or ≥500 ng/mL for the ELISA assay (VIDAS, Biomerieux, France) [15]. Values were dichotomized: normal versus abnormal. In the present methodological study, we focus on the calf difference and D-dimer test as index tests. Presence of DVT (yes/no) was assessed in all patients with the reference test (repeated compression ultrasonography of the symptomatic leg).

Nested case–control samples

We first studied a source population based on the original data set (Figure 2 , line 1), with a prevalence of DVT of 0.1 (140 cases, 1260 controls), reflecting a relatively rare disease situation that commonly directs case–control studies (Figure 2 , line 2). The diagnostic accuracy parameters estimated for this source population serve as the commonly unknown true parameter values (see below and Table 1 ). Subsequently, we mimicked a cross-sectional cohort study of the same size as the source population, i.e. 1400 patients that were drawn with replacement from our source population (cohort, Figure 2 , line 3).

Figure 2. Flow-chart of the sampling process of the nested case–control samples. This process was repeated for a deep venous thrombosis prevalence in the source population of 0.05, 0.1 and 0.2. “n” in the nested case–control sample represents the average sample size (cases plus controls) across 1000 samples.

A nested case–control sample was then created from the cohort (Figure 2, line 4). We included all patients with DVT (cases) from the corresponding cohort in the nested case–control sample, and an equally sized random sample of the subjects without DVT (controls): case–control ratio = 1:1. To limit sampling error (random variation), we repeated the above procedure 1000 times, creating 1000 study cohorts from the source population and hence 1000 nested case–control samples. In the 1000 nested case–control samples we estimated the predictive values of both index tests and their uncertainty (standard error and 95% CI) using the six approaches described below. All this was also done for three other case–control ratios: 2 controls per case (ratio 1:2); 3 controls per case (1:3); and 4 controls per case (1:4). The prevalence of the 1000 cohorts was thus not fixed across the different cohorts, though the mean prevalence was 0.1 (95% CI 0.08–0.12). The actual prevalence of the corresponding cohort was used for all subsequent calculations in the nested case–control sample.

Finally, the entire process of creating the 1000 study cohorts and 1000 corresponding nested-case control samples (with the four different case–control ratios), was repeated for a source population (n=1400) with a DVT prevalence of 0.05 (70 cases) and 0.2 (280 cases).

Approaches to estimate the uncertainty of predictive values of a diagnostic test from a nested case–control study

We compared six approaches to estimate the 95% CI of the predictive values obtained from the nested case–control samples, for the two index tests (D-dimer test and calf circumference difference). The point estimates of the predictive values were obviously the same for all six approaches, while the standard error estimates and hence 95% CI could vary. We describe the approaches for the predictive value of a positive result (positive predictive value = PPV). They can mutatis mutandis be applied to the negative predictive value (NPV). Notations used below, refer to those used in Figure 1 (see legend of Figure 1 for explanation of variable names).

1. Estimate the standard error of the PPV (SE(PPV)) using the standard formula for the SE of a proportion with the actually observed number of patients in the nested case–control sample:

SE(PPV) = sqrt(PPV · (1 − PPV) / (a + b1)), with PPV here the uncorrected sample proportion a/(a + b1)

The 95% confidence interval can then simply be calculated as PPV ± 1.96·SE(PPV). Calculating the SE with the actually observed numbers in the nested case–control samples (i.e., without correction by the sampling fraction that is used to estimate the correct PPV, as shown in Table 1) agrees with the number of patients actually measured. However, the proportion in approach 1 then does not correspond to the estimated (corrected) PPV.

2. Estimate the SE(PPV) using the standard formula for the SE of a proportion with correction for the sampling fraction in the numerator of approach 1 above, but not in the denominator:

SE(PPV) = sqrt(PPV · (1 − PPV) / (a + b1)), with PPV the corrected estimate

The correction is only applied to the numerator, as this reflects the (corrected) PPV estimate. Applying the correction also to the denominator would make the SE incorrectly small: a larger number of patients than actually observed would then be used in the SE estimation.

3. Assess the empirical distribution of the PPV using a bootstrap procedure. Per nested case–control sample we drew 1000 bootstrap samples and estimated the PPV in each bootstrap sample. The PPV values corresponding to the 2.5 and 97.5 percentiles of the 1000 bootstrap estimates were used as the limits of the 95% confidence interval.
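A minimal sketch of this bootstrap, using synthetic individual-level records built from the Figure 1 counts; the data and the percentile indices are illustrative only:

```python
import random

random.seed(1)

# Synthetic records (test_positive, diseased) matching Figure 1, Table C:
# 30 true positives, 10 sampled false positives, 10 false negatives,
# 30 sampled true negatives; control sampling fraction = 0.1.
sample = ([(True, True)] * 30 + [(True, False)] * 10 +
          [(False, True)] * 10 + [(False, False)] * 30)
sampling_fraction = 0.1

def weighted_ppv(records, sf):
    """PPV with sampled controls up-weighted by 1/sf."""
    tp = sum(1 for pos, dis in records if pos and dis)
    fp = sum(1 for pos, dis in records if pos and not dis)
    return tp / (tp + fp / sf)

# 1000 bootstrap resamples of the nested sample; 2.5th/97.5th percentiles
# of the resampled PPVs form the 95% CI.
boot = sorted(
    weighted_ppv([random.choice(sample) for _ in sample], sampling_fraction)
    for _ in range(1000)
)
ci_lower, ci_upper = boot[24], boot[974]
```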

4. The method recently described by Mercaldo and colleagues [12]. This approach uses the prevalence from the underlying study cohort (not to be confused with our ‘true’ source population, see above) and the sensitivity and specificity estimated from the case–control sample to calculate the correct PPV. Using the sensitivity (Sens), specificity (Spec) and prevalence (p), not only the PPV but also the SE(PPV) can be estimated, with PPV = (Sens · p) / (Sens · p + (1 − Spec) · (1 − p)).
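A sketch of the PPV part of this approach (Bayes’ rule on sensitivity, specificity and prevalence; the paper’s SE formula is not reproduced here), checked against the Figure 1 numbers:

```python
# PPV from sensitivity, specificity and cohort prevalence (Bayes' rule).
def ppv_from_sens_spec(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

# Figure 1 full cohort: Sens = 30/40, Spec = 300/400, prevalence = 40/440.
sens, spec, prev = 30 / 40, 300 / 400, 40 / 440
print(round(ppv_from_sens_spec(sens, spec, prev), 2))  # → 0.23
```

This reproduces the same PPV of 0.23 as the direct weighting approach, as it must.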

5. Weighted logistic regression. This is an ordinary logistic regression model with outcome disease present (yes/no) and one covariable (index test result, positive or negative), with weights for cases and controls. The model can be written as $\operatorname{logit}(PPV) = \log\frac{PPV}{1-PPV} = \alpha + \beta x$, with $x = 1$ for a positive index test result. Each case receives a weight $w_{\text{case}} = N_1/N$ (rather than simply weight 1) and each non-case receives weight $w_{\text{non-case}} = (N_1/N) \cdot (1/\text{sampling fraction})$. Hence, the sum of the weights over all sampled subjects equals the total number of subjects in the nested case–control sample ($N_1$); this sum equals the effective sample size in the estimation of the PPV and SE(PPV). Results of the weighted regression analysis are the intercept ($\alpha$) and the regression coefficient for the index test ($\beta$). The standard error of the logit(PPV) can be calculated from the covariance matrix as $SE(\operatorname{logit}(PPV)) = \sqrt{\operatorname{var}(\alpha) + \operatorname{var}(\beta) + 2\operatorname{cov}(\alpha,\beta)}$. The covariance matrix is estimated with the correct number of observed ($N_1$) patients, since cases and controls were weighted in the analysis.
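To make the weighting concrete, the following sketch builds the weights from hypothetical counts (140 cases and 280 sampled controls from a cohort of 1400, assuming all cases in the cohort enter the nested sample) and checks that they sum to $N_1$:

```python
def nested_cc_weights(n_cases, n_sampled_controls, cohort_size):
    """Case/control weights for the weighted logistic regression
    (assuming all cases in the cohort are included in the nested sample)."""
    n1 = n_cases + n_sampled_controls                   # nested sample size N1
    sampling_fraction = n_sampled_controls / (cohort_size - n_cases)
    w_case = n1 / cohort_size                           # N1 / N
    w_control = (n1 / cohort_size) / sampling_fraction  # (N1/N) * 1/sf
    return w_case, w_control

# hypothetical counts: 140 cases and 280 controls from a cohort of 1400
w_case, w_control = nested_cc_weights(140, 280, 1400)
total = 140 * w_case + 280 * w_control  # sums to N1 = 420
```

Because the upweighted controls reconstruct the full cohort ($N$) and both weights are then scaled by $N_1/N$, the weight total equals the nested sample size, so the covariance matrix reflects the number of patients actually observed.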

6. Use the approach by Mercaldo and colleagues (approach 4) [ 12 ] on the log odds scale. One uses the sensitivity (Sens), specificity (Spec) and prevalence (p) in the known study cohort to estimate the SE of the logit(PPV) by:
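The displayed formula is again missing. A delta-method reconstruction consistent with a logit-scale interval (a reconstruction, not a verbatim copy of [12]), with $n_{\text{cases}}$ and $n_{\text{controls}}$ denoting the sampled numbers from which Sens and Spec are estimated, is:

```latex
SE\bigl(\operatorname{logit}(\mathrm{PPV})\bigr)
  = \sqrt{\frac{1-\mathrm{Sens}}{\mathrm{Sens}\, n_{\text{cases}}}
        + \frac{\mathrm{Spec}}{(1-\mathrm{Spec})\, n_{\text{controls}}}}
```

The 95% CI is then computed on the logit scale and back-transformed to the probability scale, which keeps the interval inside [0, 1].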

Statistical analysis

The PPVs of both index tests were thus calculated using the weighting approach from Figure 1. We then estimated the 95% confidence interval of the PPV using the six approaches above. From the 1000 nested case–control samples, the average 95% confidence interval width and the coverage probability were estimated. The narrower the average confidence interval width, the more precise the estimated predictive value [ 16 ]. The coverage probability is the proportion of the 1000 confidence intervals that included the true PPV estimated from the source population. The coverage should not fall outside two SEs of the nominal probability (p) [ 16 ]. The nominal p is 0.05 for a 95% confidence interval, with $SE(p) = \sqrt{p(1-p)/B}$ = 0.0069 for a simulation study with B = 1000 repetitions. The corresponding coverage ranges from 0.936 to 0.964. If the coverage probability of the PPVs falls outside this interval, we speak of substantial undercoverage for a lower coverage probability (<0.936) or overcoverage for a higher coverage probability (>0.964).
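The quoted tolerance can be reproduced directly:

```python
import math

B = 1000          # number of simulation repetitions
p_nominal = 0.05  # nominal non-coverage of a 95% confidence interval
se = math.sqrt(p_nominal * (1 - p_nominal) / B)  # ~ 0.0069
lower, upper = 0.95 - 2 * se, 0.95 + 2 * se      # acceptable coverage range
# lower ~ 0.936, upper ~ 0.964
```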

The ideal estimation approach has a coverage close to 95% and a small 95% confidence interval of the estimated predictive values.

All analyses were executed for the four case–control ratios and for the three different disease prevalences in the source population.

Analyses were performed with R version 2.6.0 [ 17 ].

Results

Table 1 shows the accuracy estimates of both index tests as estimated from the source population. The PPV of both tests was low and the NPV of both tests was high, as a result of the low prevalence of DVT. For both tests, the PPV increased and the NPV decreased with increasing prevalence of DVT. The D-dimer test was very sensitive with limited specificity. The calf difference test was moderately sensitive and specific. The D-dimer test was positive in 978 (70%) patients for a DVT prevalence of 0.1; the calf difference test was positive in 568 (41%) patients. Changing the disease prevalence did not change the percentage of positive tests. As expected, for both tests the sensitivity, specificity and diagnostic odds ratio were similar for each prevalence. The point estimates for the PPV and NPV obtained with weighted logistic regression (0.14 and 0.99, respectively) were similar to those obtained with the standard approach.

Approaches one, two and five showed clear overcoverage at the low prevalences of 0.05 and 0.1 in the cohorts, for all case–control ratios. They showed less overcoverage at a prevalence of 0.20, and approach five even showed undercoverage (Figures 3 and 4). Approach three yielded slight overcoverage for the lower case–control ratios (1:1, 1:2) and for the low prevalences (0.05 and 0.10). Approaches four and six showed undercoverage for the higher case–control ratios (1:3, 1:4). Extreme undercoverage was seen at a prevalence of 0.20 (Figures 3 and 4, left panels) for both approaches four and six.

Figure 3

For each estimation approach (for details see text) and per deep venous thrombosis prevalence, the positive predictive value coverage probabilities and 95% confidence interval widths for the D-dimer test (1 = standard formula for the standard error of a proportion; 2 = as approach 1 but with correction for the sampling fraction; 3 = bootstrap procedure; 4 = Mercaldo and colleagues' approach; 5 = weighted logistic regression; 6 = logit transformation of the Mercaldo and colleagues approach). Colors/symbols represent the different case–control ratios (black circle = 1 case : 1 control, red square = 1:2, blue diamond = 1:3, yellow triangle = 1:4). The vertical lines represent the ideal 95% coverage with its confidence interval, i.e. the levels of acceptability.

Figure 4

For each estimation approach (for details see text) and per deep venous thrombosis prevalence, the positive predictive value coverage probabilities and 95% confidence interval widths for the calf difference test (1 = standard formula for the standard error of a proportion; 2 = as approach 1 but with correction for the sampling fraction; 3 = bootstrap procedure; 4 = Mercaldo and colleagues' approach; 5 = weighted logistic regression; 6 = logit transformation of the Mercaldo and colleagues approach). Colors/symbols represent the different case–control ratios (black circle = 1 case : 1 control, red square = 1:2, blue diamond = 1:3, yellow triangle = 1:4). The vertical lines represent the ideal 95% coverage with its confidence interval, i.e. the levels of acceptability.

In general, approach one showed the largest confidence interval widths, corresponding to its overcoverage, whereas approaches four and six showed very similar, small widths. Approach three showed slightly larger widths than approaches four and six (Figures 3 and 4, right panels).

Discussion

We compared six approaches for estimating the confidence intervals of predictive values, or post-test probabilities, of diagnostic test results when a nested case–control design is used. Using simulations in a large empirical diagnostic study, the six approaches were compared in terms of coverage and the width of the 95% confidence intervals. Our data show that a bootstrap procedure (approach 3) seems to be the preferred approach, although it was only slightly better than the other approaches. Approaches 4 and 6 showed some undercoverage, particularly for the D-dimer test with frequent positive results (around 70% positive). Approaches 1, 2 and 5 showed overcoverage. For a prevalence of 0.2 in the underlying cohort and a case–control ratio of 1:4, all approaches showed substantial undercoverage. In fact, a case–control ratio of 1:4 implies a prevalence of 0.2 in the nested case–control sample. Hence, one may argue that a full cohort study is to be preferred when the disease prevalence in the cohort is 0.2 or higher. Indeed, case–control studies are notably advantageous when the disease is rare in the cohort (i.e. prevalence below 0.1).

By applying a nested case–control design in diagnostic accuracy studies, the number of patients undergoing the index test can be substantially reduced, thereby increasing the efficiency of the study [ 6 , 8 , 10 , 11 ]. This becomes more important if the index test imposes a large patient burden or is costly, if the disease is rare, and when stored biological material is used for measuring new tests, e.g. from proteomics, metabolomics or genomics. It has previously been shown that, by applying a correction for the sampling fraction, precise point estimates of the predictive values can be obtained [ 5 ]. We found that applying a bootstrap procedure to estimate the confidence intervals around these predictive values yields adequate results for the uncertainty in the estimated predictive values. A limitation of this approach is that, due to low numbers, one of the cells of the 2×2 table may remain empty in some of the bootstrap samples; this did not happen in our simulations. If it does happen, the PPV may be estimated with a continuity correction for low numbers.

The predictive values obtained with the approach recently discussed by Mercaldo and colleagues were equal to those derived with the weighted approach from Figure 1. For the lower prevalences (0.05 and 0.10), the coverage of approaches 4 and 6 was between 0.90 and 0.95, similar to the values found by Mercaldo and colleagues themselves [ 12 ]. With increasing case–control ratio and increasing prevalence, the Mercaldo and colleagues approach yielded more undercoverage. This could be because the case–control ratio was not explicitly varied in their original paper, although in their equation the case–control ratio implicitly influences the SE and hence the confidence interval. Besides the study by Mercaldo and colleagues, we are not aware of any other studies addressing this issue of uncertainty of predictive values estimated from nested case–control studies.

A limitation of our study could be that we looked at only one original cohort in our simulations and studied only two index tests. Although the results for the different simulated combinations are alike, it is conceivable that the results could differ slightly for other combinations of disease prevalence, cohort size and diagnostic accuracy of the index tests. We certainly realize that DVT is not a truly rare disease and that most diagnostic studies on DVT are performed in a full cohort rather than a nested case–control sample. Therefore, we slightly modified the prevalence in the full cohort to better mimic the rare-disease situation, which we needed for our comparisons.

By using a fixed cohort size (n = 1400) for the different prevalences, the size of the nested case–control samples varied (Figure 2). This could have influenced our results slightly, since the SE and the confidence interval depend on the number of observations. Alternatively, one could use a fixed number of cases with varying cohort sizes for the different prevalences.

Conclusions

Our case study suggests that in diagnostic accuracy studies using a nested case–control design, one can apply a simple bootstrap procedure to obtain a confidence interval for the post-test probabilities or predictive values of the index test results. For our dataset, the bootstrap procedure showed the best combination of coverage and 95% confidence interval width compared with the other approaches. Our findings and inferences can also be applied to nested case–control studies that investigate the predictive values of results from other kinds of tests, for example prognostic tests.

Acknowledgements

The study was supported by grants from ZonMw, the Netherlands Organization for Health Research and Development (project numbers 945-27-009, 918-10-615 and 912-08-004).

References

Fryback DG, Thornbury JR: The efficacy of diagnostic imaging. Med Decis Making. 1991, 11 (2): 88-94. 10.1177/0272989X9101100203.

Gluud C, Gluud LL: Evidence based diagnostics. BMJ. 2005, 330 (7493): 724-726. 10.1136/bmj.330.7493.724.

Mackenzie R, Dixon AK: Measuring the effects of imaging: an evaluative framework. Clin Radiol. 1995, 50: 513-518. 10.1016/S0009-9260(05)83184-8.

Moons KGM, Biesheuvel CJ, Grobbee DE: Test Research versus Diagnostic Research. Clin Chem. 2004, 50 (3): 473-476. 10.1373/clinchem.2003.024752.

Biesheuvel CJ, Vergouwe Y, Oudega R, Hoes AW, Grobbee DE, Moons KG: Advantages of the nested case–control design in diagnostic research. BMC Med Res Methodol. 2008, 8: 48-10.1186/1471-2288-8-48.

Rutjes AW, Reitsma JB, Vandenbroucke JP, Glas AS, Bossuyt PM: Case–control and two-gate designs in diagnostic accuracy studies. Clin Chem. 2005, 51 (8): 1335-1341. 10.1373/clinchem.2005.048595.

Pepe MS, Feng Z, Janes H, Bossuyt PM, Potter JD: Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design. J Natl Cancer Inst. 2008, 100 (20): 1432-1438. 10.1093/jnci/djn326.

Ernster VL: Nested case–control studies. Prev Med. 1994, 23 (5): 587-590. 10.1006/pmed.1994.1093.

Baker SG, Kramer BS, Srivastava S: Markers for early detection of cancer: statistical guidelines for nested case–control studies. BMC Med Res Methodol. 2002, 2: 4-10.1186/1471-2288-2-4.

Mantel N: Synthetic retrospective studies and related topics. Biometrics. 1973, 29 (3): 479-486. 10.2307/2529171.

Essebag V, Genest J, Suissa S, Pilote L: The nested case–control study in cardiology. Am Heart J. 2003, 146 (4): 581-590. 10.1016/S0002-8703(03)00512-X.

Mercaldo ND, Lau KF, Zhou XH: Confidence intervals for predictive values with an emphasis to case–control studies. Stat Med. 2007, 26 (10): 2170-2183. 10.1002/sim.2677.

Oudega R, Hoes AW, Moons KG: The Wells rule does not adequately rule out deep venous thrombosis in primary care patients. Ann Intern Med. 2005, 143 (2): 100-107.

Oudega R, Moons KG, Hoes AW: Limited value of patient history and physical examination in diagnosing deep vein thrombosis in primary care. Fam Pract. 2005, 22 (1): 86-91.

Oudega R, Toll DB, Bulten RJ, Hoes AW, Moons KG: Different cut-off values for two D-dimer assays to exclude deep venous thrombosis in primary care. Thromb Haemost. 2006, 95 (4): 744-746.

Burton A, Altman DG, Royston P, Holder RL: The design of simulation studies in medical statistics. Stat Med. 2006, 25 (24): 4279-4292. 10.1002/sim.2673.

R Development Core Team: R: A language and environment for statistical computing. 2007, R Foundation for Statistical Computing, Vienna, Austria.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/12/166/prepub

Author information

Authors and affiliations

Julius Center for Health Sciences and Primary Care and Division of Anesthesiology, Intensive Care and Emergency Medicine, University Medical Center Utrecht, PO Box 85500, 3508 GA, Utrecht, The Netherlands

Bas van Zaane & Karel GM Moons

Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, PO Box 85500, 3508 GA, Utrecht, The Netherlands

Yvonne Vergouwe

Department of Epidemiology, Biostatistics and Health Technology Assessment, Radboud University Nijmegen Medical Center, PO Box 9101, 6500 HB, Nijmegen, The Netherlands

A Rogier T Donders

Corresponding author

Correspondence to Bas van Zaane .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

BvZ designed the study, performed the analysis and drafted the manuscript. YV designed the study and drafted the manuscript. ART Donders designed the study and drafted the manuscript. KGMM designed the study and drafted the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

van Zaane, B., Vergouwe, Y., Donders, A.R.T. et al. Comparison of approaches to estimate confidence intervals of post-test probabilities of diagnostic test results in a nested case-control study. BMC Med Res Methodol 12 , 166 (2012). https://doi.org/10.1186/1471-2288-12-166

Received: 03 October 2011

Accepted: 26 October 2012

Published: 31 October 2012

DOI: https://doi.org/10.1186/1471-2288-12-166

BMC Medical Research Methodology

ISSN: 1471-2288
