Response rates are a perennial topic throughout the survey research profession. At a basic level, a response rate indicates the percentage of persons (or sampled units) that participate in a research project out of all those asked to participate. Researchers use response rates to understand one of the principal sources of error that can bias research results: survey non-response.

Background
Sampling theory posits that data collected from a randomly selected sample may be projected onto a larger population. Thus, an opinion poll of 1,000 individuals selected randomly from an appropriate source may represent the opinions of all Americans within a certain margin of error. However, when one or more of the persons selected to participate cannot (or will not) be interviewed, the quality of the data collected may be jeopardized. Response rates are considered an important indicator of data quality since they (and other non-response metrics) describe the percentage of sampled persons who could not or would not be interviewed.
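
For concreteness, the margin of error mentioned above follows from the standard formula for a proportion, MOE = z * sqrt(p(1-p)/n). A minimal Python sketch (the poll itself is hypothetical; only the sample size of 1,000 comes from the example above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# The 1,000-person poll from the example above:
print(f"+/- {margin_of_error(1000):.1%}")  # about +/- 3.1 percentage points
```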

Response rates, however, are not as directly linked to data quality as many in the research profession originally thought (and many still believe). In conceptual terms, bias does not come from the amount of non-response (or percentage of non-responders); it comes from situations where the types of people who participate (or do not participate) are not independent of the variables the project is designed to study.[1] Put more simply, data quality is affected if, and when, non-responders are uniquely different from responders in terms of the data the researcher is collecting. If this is the case, the survey is said to suffer from non-response bias.
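
One way to see this point is the standard approximation for the bias of a respondent mean: bias is roughly the non-response rate times the difference between respondent and non-respondent means. A minimal Python sketch with invented numbers:

```python
def nonresponse_bias(response_rate, respondent_mean, nonrespondent_mean):
    """Approximate bias of the respondent mean: the non-response rate
    times the respondent/non-respondent difference on the variable."""
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# Same low response rate in both cases; only the second survey is biased.
print(nonresponse_bias(0.21, 0.50, 0.50))  # 0.0   -- non-responders look alike
print(nonresponse_bias(0.21, 0.50, 0.35))  # ~0.12 -- non-responders differ
```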

How Big is the Problem of Survey Non-response?
MRA conducts research aimed at uncovering information about survey non-response with the Research Profession Tracking Study (RPTS). Most recently, KL Communications hosted the project for MRA as an online survey. To calculate a response rate statistic, the project used data from 117 telephone, mail and in-person projects[2] and found an average response rate of about 21 percent. This calculation used a very basic response rate formula that does not consider ineligible persons, in order to simplify data input and improve overall accuracy.[3]
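
A minimal sketch of the basic formula described in note [3]; the counts below are hypothetical, not RPTS data:

```python
def basic_response_rate(completes, attempted):
    """Completed interviews divided by sample pieces attempted, with no
    adjustment for ineligible persons (the simplified RPTS formula)."""
    return completes / attempted

# Hypothetical project: 210 completes out of 1,000 attempted contacts.
print(f"{basic_response_rate(210, 1000):.0%}")  # 21%
```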

[Figure: 2007 Response Rates]

The current and past iterations of the project (started in 1999) have gauged response rates between 17 and 23 percent. Interpretation of these findings is somewhat clouded by important changes in the design of the project over time. Because the study carries several important limitations, the findings are not intended to be projected onto the survey research profession as a whole, but the results do offer an insightful look into the performance of numerous professional survey research projects.

Managing Survey Non-Response
There are currently several ways concerned researchers may tackle the issue of survey non-response. In his 2006 Public Opinion Quarterly article,[4] Robert M. Groves reviews the popular models for assessing non-response bias in surveys. The techniques are summarized below:

1) Examine Response Rates of Subgroups

Explanation: The researcher examines response rates from different subgroups (e.g. demographics). Since these factors are often included as explanatory variables in research, differing response rates among these subgroups may indicate non-response bias. If differing response rates are found, they are either determined to be unimportant or corrected by weighting (a sketch of this approach follows this item).

Major Weakness: Factors other than those represented by the subgroups may induce non-response bias.
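
A minimal sketch of subgroup response rates and a simple weighting-class correction; the subgroup labels and counts are hypothetical:

```python
# Hypothetical sampled and completed counts by age subgroup.
sampled = {"18-34": 400, "35-54": 350, "55+": 250}
completes = {"18-34": 60, "35-54": 80, "55+": 70}

rates = {g: completes[g] / sampled[g] for g in sampled}
print(rates)  # {'18-34': 0.15, '35-54': ~0.23, '55+': 0.28}

# Weighting-class correction: weight each respondent by the inverse of
# his or her subgroup's response rate, so under-responding groups count more.
weights = {g: 1 / rates[g] for g in sampled}
print(weights)  # 18-34 respondents receive the largest weight
```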

2) Append and Analyze Additional Variables

Explanation: The researcher utilizes external databases or rich sampling frame information that can be appended to respondent and non-respondent cases alike. The researcher then compares these data between responders and non-responders to yield clues as to whether bias exists in the survey data, based upon the differences found and their possible relationship to the survey variables (see the sketch after this item).

Major Weakness: The available information may not be sufficient to draw conclusions about the variables the researcher is studying with his/her survey.
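
A minimal sketch of this comparison, assuming a frame variable (here, hypothetical years of account tenure) is known for responders and non-responders alike:

```python
# Hypothetical appended frame variable, available for every sampled case.
responder_tenure = [2.0, 5.5, 7.0, 3.5, 6.0]
nonresponder_tenure = [1.0, 1.5, 2.0, 4.0, 0.5, 2.5, 1.0]

def mean(xs):
    return sum(xs) / len(xs)

gap = mean(responder_tenure) - mean(nonresponder_tenure)
# A large gap on a variable related to the survey topic is a clue that
# the survey estimates may carry non-response bias.
print(f"responders {mean(responder_tenure):.2f}, "
      f"non-responders {mean(nonresponder_tenure):.2f}, gap {gap:.2f}")
```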

3) Compare to Similar External Statistics

Explanation: The researcher compares estimates of the survey variables from his or her own survey to those of a very high-quality survey (e.g. the US Census); a sketch follows this item.

Major Weakness: The measurement of the survey variables being compared may differ between the surveys, and there may be few meaningful variables available for comparison.
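
A minimal sketch of the benchmark comparison, flagging gaps larger than the survey's own sampling error (all figures are hypothetical, including the benchmark):

```python
import math

def benchmark_gap(survey_p, n, benchmark_p, z=1.96):
    """Difference between a survey proportion and an external benchmark,
    flagged when it exceeds the survey's 95% margin of error."""
    moe = z * math.sqrt(survey_p * (1 - survey_p) / n)
    gap = survey_p - benchmark_p
    return gap, abs(gap) > moe

# Hypothetical: survey estimates 62% home ownership (n = 1,000) versus a
# 66% benchmark from a high-quality source.
print(benchmark_gap(0.62, 1000, 0.66))  # (-0.04, True) -- worth investigating
```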

4) Examine According to Response/Non-response Subgroups

Explanation: The researcher examines the respondents to the survey in subgroups created according to how much effort was required to interview them. Those who were easiest to interview (e.g. responded on the first call attempt or email notification) are compared to more difficult-to-interview respondent subgroups (e.g. those who required multiple follow-ups). Theoretically, those who were more difficult to interview more closely mimic the characteristics and qualities of non-respondents (see the sketch after this item).

Major Weakness: The technique has not been shown to produce meaningful information in assessing non-response bias.
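
A minimal sketch of the level-of-effort comparison, using invented call-record data (pairs of attempts required and a 0/1 survey answer):

```python
# Hypothetical (attempts_needed, answer) pairs from a survey's call records.
cases = [(1, 1), (1, 0), (1, 1), (2, 1), (3, 0), (4, 0), (5, 0), (5, 1)]

easy = [ans for attempts, ans in cases if attempts == 1]
hard = [ans for attempts, ans in cases if attempts >= 3]

def mean(xs):
    return sum(xs) / len(xs)

# If hard-to-reach respondents answer differently, the non-respondents they
# theoretically resemble may differ as well.
print(f"1st attempt: {mean(easy):.2f}, 3+ attempts: {mean(hard):.2f}")
```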

5) Compare Weighting Schemes

Explanation: The researcher compares the unweighted survey data to various weighting schemes to determine how much impact the different adjustments have on the survey variables. The weighting schemes may include weighting-class adjustments, post-stratification, imputed adjustments, and/or combinations of these approaches (a sketch follows this item).

Major Weakness: The comparisons the researcher makes are all based on estimation as opposed to external, validated information.
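
A minimal sketch of comparing weighting schemes, with hypothetical respondent answers and invented weights standing in for weighting-class and post-stratification adjustments:

```python
# Hypothetical 0/1 respondent answers and two candidate weighting schemes.
answers = [1, 0, 1, 1, 0, 1, 0, 1]
class_weights = [1.2, 1.2, 0.8, 0.8, 1.1, 1.1, 0.9, 0.9]
poststrat_weights = [1.5, 0.7, 0.7, 1.5, 0.7, 1.5, 0.7, 1.5]

def weighted_mean(ys, ws):
    return sum(y * w for y, w in zip(ys, ws)) / sum(ws)

# A wide spread across schemes suggests the estimate is sensitive to the
# assumptions built into the non-response adjustment.
print(sum(answers) / len(answers))                # unweighted: 0.625
print(weighted_mean(answers, class_weights))      # 0.60
print(weighted_mean(answers, poststrat_weights))  # ~0.76
```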

Each of the techniques described suffers from substantial limitations, so Groves suggests combining several of the analytical techniques for the strongest analysis. Survey non-response remains a topic of great concern and interest in the survey research profession, and several promising lines of research by practitioners and academic researchers aim at developing a greater understanding of it. Readers of this document who are in a position to perform non-response investigations of their own work are encouraged to do so. Further, openly publishing non-proprietary findings will greatly assist the survey research profession in advancing solutions to these issues.

[1] For more information about non-response bias, see a special issue of Public Opinion Quarterly.

[2] Study design details for the Research Profession Tracking Study have changed with each iteration. In 2007, the project relied upon a database of MRA members and professional contacts for participation. Additionally, a random sample of institutional researchers drawn from a list of US colleges and universities was included to expand the breadth of research surveys represented in the project results. Data were collected between 10/02/2007 and 12/11/2007. The total sample size was 299.

[3] Response rates were calculated as the total number of completes divided by the number of sample pieces for which contact was attempted. This formula changed across the various iterations of the project.

[4] Groves, Robert M. 2006. “Nonresponse Rates and Nonresponse Bias in Household Surveys.” Public Opinion Quarterly 70:646–675.