The EPHPP tool — formally known as the Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies — is a validated critical appraisal instrument developed in Canada to evaluate the methodological quality of quantitative public health research. It is widely used in systematic reviews and evidence summaries to determine whether a study’s design and execution are sound enough to inform public health policy.

This page documents the tool for reference purposes. All credit for the instrument belongs to its original authors and the Effective Public Health Practice Project (EPHPP).

Understanding the EPHPP Quality Assessment Tool

The Quality Assessment Tool for Quantitative Studies was developed in Canada by the EPHPP with the financial support of the Ontario Ministry of Health and Long-Term Care (MOHLTC). Development began in 1998, prompted by another provincial public health initiative, the Ontario Mandatory Health Programs and Services Guidelines (MHPSG, 1997). The tool was developed by four individuals leading the EPHPP project:

  • D. Ciliska
  • S. Micucci
  • M. Dobbins
  • B.H. Thomas

The purpose of the initiative is straightforward: to develop a method for testing and providing evidence in support of public health interventions and research. In essence, the assessment tool is designed to appraise articles across a wide range of health-related topics, from family and sexual health to chronic disease, injury prevention, and substance abuse.

To reach a scientific conclusion, the Quality Assessment Tool for Quantitative Studies combines a number of factors with the judgments of selected experts who meet pre-determined criteria. Once the assessment is complete, each examined study receives a rating of “strong,” “moderate,” or “weak” in each of eight categories:

  1. Selection Bias — Examines how study participants were selected and whether they represent the target population.
  2. Study Design — Evaluates the appropriateness of the chosen design (e.g. RCT, cohort, cross-sectional) to the research question.
  3. Confounders — Assesses whether key confounding variables were identified and adequately controlled.
  4. Blinding — Determines whether participants and outcome assessors were blinded, as applicable to the study design.
  5. Data Collection Methods — Reviews whether the instruments used to gather data were valid and reliable.
  6. Withdrawals & Dropouts — Considers whether dropout rates and reasons were reported and handled appropriately in the analysis.
  7. Intervention Integrity — Evaluates whether the intervention was delivered consistently and as planned. Note: marked N/A for observational studies.
  8. Analysis — Appraises whether the statistical methods are appropriate to the study design and outcome variables.

Global Rating Scale

Once all eight domains are scored, an overall methodological quality rating is assigned:

• Strong — Most criteria clearly met; minimal risk of bias.
• Moderate — Several criteria met; some risk of bias present.
• Weak — Few criteria met; significant risk of bias.

These categories are considered universally relevant to any health topic and can be applied systematically rather than producing arbitrary results. Each reviewer typically takes between 30 and 60 minutes to complete an assessment, a modest time cost that makes it practical for multiple trained reviewers to examine each article.
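The step from domain ratings to a global rating can be sketched in code. This is a minimal illustration, not part of the tool itself, and it assumes the commonly cited EPHPP rule: no weak domain ratings yields a strong overall rating, exactly one weak yields moderate, and two or more weak yields weak.

```python
def global_rating(component_ratings):
    """Derive an overall EPHPP-style rating from a list of domain
    ratings, each "strong", "moderate", or "weak".

    Assumed rule (commonly cited for the EPHPP tool):
      0 weak domains  -> "strong"
      1 weak domain   -> "moderate"
      2+ weak domains -> "weak"
    """
    weak_count = sum(1 for r in component_ratings if r == "weak")
    if weak_count == 0:
        return "strong"
    if weak_count == 1:
        return "moderate"
    return "weak"

# Example: a single weak domain pulls the study down to "moderate".
ratings = ["strong", "strong", "moderate", "weak", "strong", "moderate"]
print(global_rating(ratings))  # moderate
```

Counting weak ratings, rather than averaging, reflects the tool's conservative stance: one seriously flawed domain is enough to downgrade an otherwise sound study.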

Online Gambling and Public Health Research in Canada

In the digital age, the impact of online activities on public health has become a growing area of study. This includes online gambling, which has seen a significant increase with the rise of internet accessibility. It is important for public health researchers to assess the effects of such activities on mental health, including the risks of addiction or financial stress.

Studies can apply the Quality Assessment Tool for Quantitative Studies to evaluate research on online casinos and gambling behaviours. Understanding these impacts can help public health officials develop programs to educate and protect individuals engaging in online gambling. For instance, assessing the quality of studies on Canadian online gambling could provide insights into gambling trends and their effects on the population.

How Do We Know the Evaluation Is Properly Carried Out?

The authors of the methodology ensured that the EPHPP tool meets several standards, specifically relating to the tool's validity, the manner of evaluation, and inter-rater reliability. For example, to confirm reliability, two studies were rated independently by separate reviewers, who drew the same conclusions.

Validity was established by an independent team tasked with assessing whether the selection criteria were clear and complete, and whether the results drawn by individual researchers following the suggested methodology were consistent.

The conclusion was that the EPHPP tool itself merits a strong methodological rating, which supports its use for assessing public health articles.

Who Carries Out the Assessment?

There are strict criteria as to who may carry out an assessment using the Quality Assessment Tool for Quantitative Studies. As per the tool criteria, a group of four to six experts works together on each review. Each study must be scored independently by at least two reviewers, who then compare their ratings and resolve any disagreements through discussion before a final score is agreed upon. The assessment experts generate questions that pertain directly to the research, seek additional sources that corroborate or challenge the validity of the examined research, and are responsible for how the data is compiled and how the results are synthesized and disseminated.

The EPHPP recommends that at least two of the experts have the necessary subject-matter background, while at least one individual holds the methodological expertise to conduct an assessment.

Each report must also be reviewed by five individual medical experts, known in research terminology as peers.

Tool implementation

Development of the tool began in 1998. It was formally published in 2004 and last revised in 2010 — the 2010 edition is the current version.

For the successful implementation of the method, the authors created a supplementary guide called the “Dictionary for the Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies.”

This reference document has proven an efficient way to support consistent evaluation of public health articles. The tool does not prescribe a standardized sequence of actions, but the EPHPP requires that the full evaluation be completed based on all criteria, with the required number of experts present.
