How to Study Epidemiology: 10 Proven Techniques
Epidemiology is the foundational discipline of public health, requiring you to design studies that isolate causal effects in messy human populations where you cannot control variables the way a lab scientist can. Success depends on developing rigorous methodological thinking — the ability to identify bias, confounding, and chance as alternative explanations for any observed association.
Why Studying Epidemiology Is Different
Unlike laboratory sciences where you control the experiment, epidemiology works with observational data from free-living human populations where confounding is everywhere, measurement is imperfect, and ethical constraints prevent many experimental designs. The discipline requires a particular kind of critical thinking: always asking 'could this association be explained by something other than a causal relationship?'
10 Study Techniques for Epidemiology
Classic Study Walkthroughs
Study the landmark epidemiological investigations (Framingham Heart Study, Doll and Hill smoking studies, John Snow's cholera investigation) to see methods applied to real problems. These studies illustrate principles far better than abstract descriptions.
How to apply this:
For each classic study, identify the study design, the exposure and outcome, the comparison groups, the potential biases, and how the investigators addressed them. Write a one-page critique as if you were a peer reviewer.
Measures of Association Hand Calculation
Calculate relative risk, odds ratio, attributable risk, and number needed to treat by hand until you can do them without thinking. These measures are the language of epidemiology, and fluency requires practice beyond conceptual understanding.
How to apply this:
Create a set of 2x2 tables with different data. For each, calculate RR, OR, AR, AR%, PAR, and NNT. Interpret each measure in a plain-language sentence. Practice until you can set up the table and compute all measures in under 5 minutes.
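To check your hand calculations, it helps to script the same arithmetic once. A minimal sketch with invented counts (the 2x2 table below is not from any real study):

```python
# Practice 2x2 table for a cohort-style analysis (counts are invented):
#              Disease+   Disease-
# Exposed        a=40       b=60     (n1 = 100)
# Unexposed      c=10       d=90     (n0 = 100)

a, b, c, d = 40, 60, 10, 90

risk_exposed = a / (a + b)             # 0.40
risk_unexposed = c / (c + d)           # 0.10

rr = risk_exposed / risk_unexposed     # relative risk = 4.0
or_ = (a * d) / (b * c)                # odds ratio = 6.0
ar = risk_exposed - risk_unexposed     # attributable risk = 0.30
ar_pct = ar / risk_exposed * 100       # AR% = 75% of exposed cases attributable
nnt = 1 / ar                           # number needed to treat/harm ~ 3.3

# Population attributable risk uses the risk in the total population:
overall_risk = (a + c) / (a + b + c + d)   # 0.25
par = overall_risk - risk_unexposed        # PAR = 0.15

print(rr, or_, ar, ar_pct, nnt, par)
```

Note how the OR (6.0) overstates the RR (4.0) here because the outcome is common; comparing the two on the same table is a useful drill in itself.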
Bias Identification in Published Papers
Practice identifying selection bias, information bias, and confounding in published epidemiological papers. The ability to critically appraise a study's validity is the most important skill in epidemiology.
How to apply this:
Read one epidemiological paper per week. For each, list potential sources of selection bias, information bias (recall bias, interviewer bias, misclassification), and confounding. Assess whether the authors adequately addressed each threat to validity.
DAG (Directed Acyclic Graph) Practice
Use DAGs to reason about causal structures before choosing an analysis strategy. DAGs make confounding, mediation, and collider bias visually explicit and prevent common analytical errors like adjusting for a collider.
How to apply this:
For each study you read or design, draw a DAG showing the hypothesized causal relationships between exposure, outcome, and potential confounders. Use the DAG to determine which variables to adjust for and which to leave unadjusted.
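The same reasoning can be sketched in code. A toy example with an assumed causal structure (the classic coffee / smoking / lung cancer scenario); the `common_causes` check here is a deliberate simplification that only finds shared direct causes — full adjustment-set selection requires d-separation, as implemented in tools like DAGitty:

```python
# Toy DAG: edges point from cause to effect. The structure is assumed
# for illustration — smoking causes both coffee drinking and lung cancer,
# while coffee has no causal effect on lung cancer.
dag = {
    "smoking": ["coffee", "lung_cancer"],
    "coffee": [],
    "lung_cancer": [],
}

def parents(dag, node):
    """Direct causes of a node."""
    return {cause for cause, effects in dag.items() if node in effects}

def common_causes(dag, exposure, outcome):
    """Naive confounder check: shared direct causes of exposure and outcome."""
    return parents(dag, exposure) & parents(dag, outcome)

print(common_causes(dag, "coffee", "lung_cancer"))  # {'smoking'} -> adjust for it
```

Drawing the DAG first tells you to adjust for smoking — and, just as importantly, not to adjust for anything downstream of coffee or lung cancer.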
Study Design Comparison Framework
Build a systematic comparison framework for the major study designs (cohort, case-control, cross-sectional, RCT, ecological) covering strengths, weaknesses, appropriate measures of association, and when each is used.
How to apply this:
Create a comparison table with study designs as columns and features as rows: temporality, measure of association (RR vs. OR), susceptibility to specific biases, cost, time, and ethical considerations. Memorize the key distinctions.
Confounding Control Method Practice
Practice applying different methods to control confounding (restriction, matching, stratification, multivariable regression, randomization) and understanding when each is appropriate. Confounding control is the central methodological challenge of epidemiology.
How to apply this:
For a given study scenario, apply each confounding control method and explain how it works, what it costs (reduced sample size, reduced generalizability), and when it fails. Compare stratified analysis results to crude results to see confounding in action.
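Seeing confounding "in action" is easiest with numbers. A sketch with invented counts deliberately constructed so the crude RR looks elevated while every stratum-specific RR is exactly 1.0:

```python
# Each 2x2 table is (cases exposed, non-cases exposed,
#                    cases unexposed, non-cases unexposed).
# Counts are invented to make confounding by smoking visible.

def rr(a, b, c, d):
    """Relative risk from a 2x2 table."""
    return (a / (a + b)) / (c / (c + d))

strata = {
    "smokers":     (80, 120, 20, 30),
    "non_smokers": (5, 95, 15, 285),
}

# The crude table is just the strata summed together.
crude = tuple(sum(t[i] for t in strata.values()) for i in range(4))

print("crude RR:", round(rr(*crude), 2))   # ~2.83 — apparent association
for name, table in strata.items():
    print(name, "RR:", rr(*table))         # 1.0 in both strata
```

The crude analysis suggests a near-threefold risk; stratifying on smoking shows no effect at all. That gap between the crude and adjusted estimates is the signature of confounding.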
p-Value and Confidence Interval Interpretation
Practice interpreting p-values and confidence intervals correctly, not just categorically ('significant vs. not significant'). Nuanced interpretation is a mark of epidemiological sophistication and a common exam focus.
How to apply this:
For 10 study results, write a correct interpretation of the p-value and confidence interval. Avoid the common errors: p-values are NOT the probability the null hypothesis is true, and confidence intervals are NOT the range where the true value definitely lies.
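Computing a confidence interval yourself makes the correct interpretation concrete. A sketch using the standard log-transform (Katz) method for an RR confidence interval, with hypothetical cohort counts:

```python
import math

# Hypothetical cohort 2x2 counts (invented for illustration).
a, b, c, d = 40, 60, 10, 90
n1, n0 = a + b, c + d

rr = (a / n1) / (c / n0)                        # point estimate: 4.0
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)  # SE on the log scale
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Correct reading: a range of RR values reasonably compatible with these data,
# NOT "the true RR has a 95% probability of lying in this interval".
```

Note the asymmetry of the interval around 4.0 — a reminder that ratio measures are analyzed on the log scale.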
Outbreak Investigation Simulations
Work through simulated outbreak investigations step by step, from case identification through hypothesis testing to intervention. This integrates study design, measures of association, and practical decision-making.
How to apply this:
Use CDC outbreak investigation case studies or textbook exercises. Follow the standard steps: confirm the diagnosis, define a case, describe the outbreak (time, place, person), develop hypotheses, test with analytical studies, implement control measures.
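The analytical-study step usually means a food-specific attack-rate table. A minimal sketch with an invented line-list summary (the foods and counts are illustrative, not from a real investigation):

```python
# Each food maps to (ill eaters, well eaters, ill non-eaters, well non-eaters).
# Counts are invented for practice.
foods = {
    "potato_salad": (30, 10, 5, 45),
    "chicken":      (20, 20, 15, 35),
}

def attack_rate_rr(a, b, c, d):
    """Attack rate among eaters and non-eaters, plus their ratio (RR)."""
    ar_eaters = a / (a + b)
    ar_non_eaters = c / (c + d)
    return ar_eaters, ar_non_eaters, ar_eaters / ar_non_eaters

for food, table in foods.items():
    ar1, ar0, ratio = attack_rate_rr(*table)
    print(f"{food}: AR eaters {ar1:.0%}, AR non-eaters {ar0:.0%}, RR {ratio:.1f}")
```

Here potato salad stands out (high attack rate in eaters, low in non-eaters, large RR), which is exactly the pattern that points hypothesis testing toward a single vehicle.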
Teach-Back Epidemiological Concepts
Explain epidemiological concepts to a non-epidemiologist using plain language and real-world examples. If you cannot explain confounding using a clear example, you do not truly understand it at a level that will serve you in practice.
How to apply this:
Explain to a non-scientist friend: what is confounding? (Use the classic coffee-lung cancer-smoking example.) What is selection bias? What is the difference between association and causation? Gauge understanding by asking them to identify confounding in a new example.
Screening Test Calculation Drills
Practice calculating and interpreting sensitivity, specificity, positive predictive value, and negative predictive value. These measures are fundamental to understanding screening programs and diagnostic tests in public health.
How to apply this:
Work through 2x2 tables for screening tests with different prevalences. Calculate all four measures and explain how prevalence affects predictive values. Practice the clinical interpretation: what does a positive test actually mean for this patient?
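The prevalence effect is easiest to internalize by computing predictive values across several prevalences. A sketch with assumed test characteristics (90% sensitivity, 95% specificity, both invented for illustration):

```python
def predictive_values(sens, spec, prevalence, n=100_000):
    """PPV and NPV for a hypothetical population of size n."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sens * diseased           # true positives
    fn = diseased - tp             # false negatives
    tn = spec * healthy            # true negatives
    fp = healthy - tn              # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

for prev in (0.001, 0.01, 0.10):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:>5.1%}: PPV {ppv:.1%}, NPV {npv:.3%}")
```

With the same test, PPV climbs from under 2% at 0.1% prevalence to about 67% at 10% prevalence — the clinical point being that a positive screening result in a low-prevalence population is still more likely to be a false positive than a true one.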
Sample Weekly Study Schedule
| Day | Focus | Time |
|---|---|---|
| Monday | Study design and methods | 55m |
| Tuesday | Calculations and measures | 50m |
| Wednesday | Critical appraisal | 60m |
| Thursday | Classic studies and outbreak investigation | 55m |
| Friday | Interpretation and teaching | 40m |
| Saturday | Extended paper reading and analysis | 75m |
| Sunday | Review calculations and frameworks | 30m |
Total: ~6 hours/week. Adjust based on your course load and exam schedule.
Common Pitfalls to Avoid
Defining confounding but failing to identify it in real study designs, where confounders are not labeled for you
Treating p < 0.05 as a binary cutoff for truth rather than understanding it as one piece of evidence within a larger picture
Confusing relative risk and odds ratio, or using relative risk from a case-control study where only the odds ratio is valid
Memorizing study designs in the abstract without understanding which specific biases each design is susceptible to and resistant to
Adjusting for variables in a regression model without first reasoning about the causal structure using a DAG, which can introduce bias rather than remove it