Review and special article
How We Design Feasibility Studies

https://doi.org/10.1016/j.amepre.2009.02.002

Abstract

Public health is moving toward the goal of implementing evidence-based interventions. To accomplish this, there is a need to select, adapt, and evaluate intervention studies. Such selection relies, in part, on making judgments about the feasibility of possible interventions and determining whether comprehensive and multilevel evaluations are justified. There exist few published standards and guides to aid these judgments. This article describes the diverse types of feasibility studies conducted in the field of cancer prevention, using a group of recently funded grants from the National Cancer Institute. The grants were submitted in response to a request for applications proposing research to identify feasible interventions for increasing the utilization of the Cancer Information Service among underserved populations.

Introduction

The field of health promotion and disease prevention is moving toward the goal of implementing evidence-based interventions that have been rigorously evaluated and found to be both efficacious and effective. This will encourage the evaluation of the efficacy of additional interventions, using standards of the sort applied in the evidence reviews conducted by the Cochrane Collaboration (www.cochrane.org) and the Task Force on Community Preventive Services (www.thecommunityguide.org).

By intervention is meant any program, service, policy, or product that is intended to ultimately influence or change people's social, environmental, and organizational conditions as well as their choices, attitudes, beliefs, and behaviors. Both early conceptual models of health education1 and more modern versions of health promotion2 indicate that interventions should focus on changeable behaviors and objectives; be based on critical, empirical evidence linking behavior to health; be relevant to the target populations; and have the potential to meet the intervention's goals. In cancer prevention and control, intervention efficacy has been defined as meeting the intended behavioral outcomes under ideal circumstances. In contrast, effectiveness studies can be viewed as evaluating success in real-world, non-ideal conditions.3

Clearly, because of resource constraints, not all interventions can be tested for both efficacy and effectiveness. Guidelines are needed to help evaluate and prioritize those interventions with the greatest likelihood of being efficacious. Feasibility studies are relied on to produce a set of findings that help determine whether an intervention should be recommended for efficacy testing. The published literature does not propose standards to guide the design and evaluation of feasibility studies. This gap in the literature and in common practice needs to be filled as the fields of evidence-based behavioral medicine and public health practice mature.

This article presents ideas for designing a feasibility study. Included are descriptions of feasibility studies from all phases of the original cancer-control continuum: from basic social science to determine the best variables to target, through methods development, to efficacy and effectiveness studies, to dissemination research. The term feasibility study is used more broadly than usual to encompass any sort of study that can help investigators prepare for full-scale research leading to intervention. It is hoped that this article can prove useful both to researchers when they consider their own intervention design and to reviewers of intervention-related grants.

Feasibility studies are used to determine whether an intervention is appropriate for further testing; in other words, they enable researchers to assess whether or not the ideas and findings can be shaped to be relevant and sustainable. Such research may identify not only what—if anything—in the research methods or protocols needs modification but also how changes might occur. For example, a feasibility study may be in order when researchers want to compare different research and recruitment strategies. Gustafson4 found that African-American women report more mistrust of medical establishments than do white women. A feasibility study might qualitatively examine women's reactions to a specific intervention handout intended to promote trust in a medical institution. If women's reactions were positive and in line with increased trust in the institution, the feasibility study would have served as a precursor to testing the effects of that handout in recruiting women to a randomized prevention trial.5

Performing a feasibility study may be indicated when:

  • community partnerships need to be established, increased, or sustained;

  • there are few previously published studies or existing data using a specific intervention technique;

  • prior studies of a specific intervention technique in a specific population were not guided by in-depth research or knowledge of the population's sociocultural health beliefs; by members of diverse research teams; or by researchers familiar with the target population and in partnership with the targeted communities;

  • the population or intervention target has been shown empirically to need unique consideration of the topic, method, or outcome in other research; or

  • previous interventions that employed a similar method have not been successful, but improved versions may be successful; or previous interventions had positive outcomes but in different settings than the one of interest.

It is proposed that there are eight general areas of focus addressed by feasibility studies. Each is described below and summarized in Table 1.

  • Acceptability. This relatively common focus looks at how the intended individual recipients—both targeted individuals and those involved in implementing programs—react to the intervention.

  • Demand. Demand for the intervention can be assessed by gathering data on estimated use or by actually documenting the use of selected intervention activities in a defined intervention population or setting.

  • Implementation. This research focus concerns the extent, likelihood, and manner in which an intervention can be fully implemented as planned and proposed,6 often in an uncontrolled design.

  • Practicality. This focus explores the extent to which an intervention can be delivered when resources, time, commitment, or some combination thereof are constrained in some way.

  • Adaptation. Adaptation focuses on changing program contents or procedures to be appropriate in a new situation. It is important to describe the actual modifications that are made to accommodate the context and requirements of a different format, media, or population.7

  • Integration. This focus assesses the level of system change needed to integrate a new program or process into an existing infrastructure or program.8 The documentation of change that occurs within the organizational setting or the social/physical environment as a direct result of integrating the new program can help to determine if the new venture is truly feasible.

  • Expansion. This focus examines the potential success of an already-successful intervention with a different population or in a different setting.

  • Limited-efficacy testing. Many feasibility studies are designed to test an intervention in a limited way. Such tests may be conducted in a convenience sample, with intermediate rather than final outcomes, with shorter follow-up periods, or with limited statistical power (see the sketch following this list).
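
To make "limited statistical power" concrete, the following sketch, a minimal illustration that is not part of the original article, computes the power of a hypothetical two-arm pilot using Python's statsmodels package; the sample size, alpha level, and effect sizes are assumptions chosen only for illustration.

# Minimal sketch (assumptions, not the article's method): power of a small
# two-arm limited-efficacy pilot comparing group means with a t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = 20     # hypothetical convenience sample: 20 participants per arm
alpha = 0.05       # conventional two-sided significance level

# Power to detect a "medium" standardized effect (Cohen's d = 0.5).
power = analysis.power(effect_size=0.5, nobs1=n_per_arm, alpha=alpha)
print(f"Power at d = 0.5 with {n_per_arm} per arm: {power:.2f}")   # roughly 0.34

# Smallest standardized effect detectable with conventional 80% power.
detectable_d = analysis.solve_power(nobs1=n_per_arm, alpha=alpha, power=0.80)
print(f"Detectable effect at 80% power: d = {detectable_d:.2f}")   # roughly 0.9

In other words, a pilot of this size can reliably detect only quite large effects, which is one reason limited-efficacy tests are usually read as signals of promise rather than as definitive evidence of efficacy.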

Green and Glasgow9 have pointed out the incongruity between increasing demands for evidence-based practice and the fact that most evidence-based recommendations for behavioral interventions are derived from highly controlled efficacy trials. The highly controlled nature of efficacy research is a strength in that the designs used (often randomized trials) make causal inferences more defensible. But this emphasis on internal validity can come at the cost of external validity: generalizability decreases, which in turn limits dissemination. Practitioners have called for more studies conducted in settings where, for example, community constraints are prioritized over optimal conditions, specifically testing the fit of interventions in real-world settings. Feasibility studies should be especially useful in helping to fill this important gap in the research literature, and new criteria and measures have been proposed (e.g., Reach, Efficacy/Effectiveness, Adoption, Implementation, Maintenance [RE-AIM]) to evaluate the relevant outcomes.10
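
As one simple illustration of the kind of measures such frameworks propose, the sketch below, an illustrative assumption rather than anything specified by the article or by RE-AIM tooling, summarizes two RE-AIM dimensions, reach and adoption, as plain proportions of eligible individuals and settings.

# Minimal sketch (illustrative assumption): reach and adoption summarized as
# simple proportions, in the spirit of the RE-AIM dimensions named above.
from dataclasses import dataclass

@dataclass
class REAIMSummary:
    eligible_individuals: int       # people in the target population
    participating_individuals: int  # people who actually took part
    eligible_settings: int          # e.g., clinics invited to deliver the program
    adopting_settings: int          # settings that actually delivered it

    @property
    def reach(self) -> float:
        # Proportion of the eligible population that participated.
        return self.participating_individuals / self.eligible_individuals

    @property
    def adoption(self) -> float:
        # Proportion of eligible settings that delivered the program.
        return self.adopting_settings / self.eligible_settings

# Hypothetical numbers, for illustration only.
summary = REAIMSummary(1200, 420, 15, 9)
print(f"Reach: {summary.reach:.0%}, Adoption: {summary.adoption:.0%}")  # Reach: 35%, Adoption: 60%

The remaining RE-AIM dimensions (effectiveness, implementation, maintenance) would draw on outcome and fidelity data rather than simple counts.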

To ensure that feasibility studies indeed reflect the realities of community and practice settings, it is essential that practitioners and community members be involved in meaningful ways in conceptualizing and designing feasibility research. Adhering to published principles of community-based participatory research11, 12 should help in this regard, with the added benefit of helping to determine whether interventions are truly acceptable to their intended audience.


Design Options for Feasibility Studies

The choice of an optimal research design depends upon the selected area of focus. This premise holds equally for feasibility studies and for other kinds of research. As the knowledge base and needs for an intervention progress, different questions come to the fore. In the initial phase of developing an intervention, Can it work? is usually the main question. Given some evidence that a treatment might work, the next question is generally Does it work?, and does it do so under ideal or actual conditions?

Conclusion

This article frames the construct of feasibility as a series of questions and methods. For an intervention to be worthy of testing for efficacy, it must address the relevant questions within feasibility. It is also important to discard or modify those interventions that do not seem to be feasible as a result of data collected during the feasibility-study phase. Using feasibility research in the intervention-research process as a determinant for accepting or discarding an intervention approach can help direct limited resources toward those interventions with the greatest likelihood of succeeding in subsequent efficacy and effectiveness testing.

References (13)

  • L.W. Green et al.

    Health education planning: a diagnostic approach

    (1980)
  • L.K. Bartholomew et al.

    Planning health promotion programs: an intervention mapping approach

    (2006)
  • P. Greenwald et al.

    The scientific approach to cancer control

    CA Cancer J Clin

    (1984)
  • D. Gustafson

    Reducing the digital divide for low-income women with breast cancer: a feasibility study of a population-based intervention

    J Community Health

    (2005)
  • L. Burhansstipanov et al.

    Lessons learned from community-based participatory research in Indian Country

    Cancer Control

    (2005)
  • L. Green et al.

    Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence?

    Am J Public Health

    (2006)
There are more references available in the full text version of this article.
