experience every time they shop at Target. This special
shopping experience is enhanced by Target’s “intuitive”
department arrangements. For example, toys are next to
sporting goods. Another shopping experience feature is the
“racetrack” or extra wide center aisle that helps shoppers
navigate the store easily and quickly. A third feature is the
aesthetic appearance of its shelves, product displays, and
seasonal specials. Naturally, Target continuously monitors the opinions and satisfaction levels of its customers, because competitors are constantly trying to outperform Target and customer preferences may change.
Target management has committed to an annual survey of 1,000 customers to determine these very issues and
to provide for a constant tracking and forecasting system
of customers’ opinions. The survey will include customers
of Target’s competitors such as Walmart, Kmart, and Sears.
In other words, the population under study is all consumers
who shop in mass merchandise stores in Target’s geographic
markets. The marketing research project director has decided on the use of a telephone survey to be conducted by a
national telephone survey data collection company, and he is
currently working with Survey Sampling, Inc., to purchase
the telephone numbers of consumers residing in Target’s
metropolitan target markets. SSI personnel have informed
him of the basic formula they use to determine the number of
telephone numbers needed. (You learned about this formula
in the chapter-opening vignette featuring Jessica Smith.)
The formula is as follows:

Telephone numbers needed = Completed interviews / (Working phone rate × Incidence × Completion rate)

where
Working phone rate = percentage of telephone numbers that are "live"
Incidence = percentage of those reached who will take part in the survey
Completion rate = percentage of those willing to take part in the survey who actually complete the survey
As a matter of convenience, Target identifies four
different regions that are roughly equal in sales volume:
North, South, East, and West.
1. With a desired final sample size of 250 for each region,
what is the lowest total number of telephone numbers
that should be purchased for each region?
2. With a desired final sample size of 250 for each region,
what is the highest total number of telephone numbers
that should be purchased for each region?
3. What is the lowest and highest total number of telephone numbers to be purchased for the entire survey?
SSI's low and high estimates of each rate, by region:

                    North           South           East            West
                    Low    High     Low    High     Low    High     Low    High
Working rate        70%    75%      60%    65%      65%    75%      50%    60%
Incidence           65%    70%      70%    80%      65%    75%      40%    50%
Completion rate     50%    70%      50%    60%      80%    90%      60%    70%
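To make the arithmetic concrete, here is a minimal Python sketch (not part of the case itself) that applies SSI's formula to the rates in the table above. The rounding-up choice and the variable names are our own assumptions; pairing the "high" rates yields the fewest numbers needed, and the "low" rates the most.

```python
import math

DESIRED_COMPLETES = 250  # desired final sample size per region, from the case

# (working phone rate, incidence, completion rate) per region,
# transcribed from the table above
rates = {
    "North": {"low": (0.70, 0.65, 0.50), "high": (0.75, 0.70, 0.70)},
    "South": {"low": (0.60, 0.70, 0.50), "high": (0.65, 0.80, 0.60)},
    "East":  {"low": (0.65, 0.65, 0.80), "high": (0.75, 0.75, 0.90)},
    "West":  {"low": (0.50, 0.40, 0.60), "high": (0.60, 0.50, 0.70)},
}

def numbers_needed(completes, working, incidence, completion):
    # SSI formula; round up because partial telephone numbers cannot be purchased
    return math.ceil(completes / (working * incidence * completion))

totals = {"fewest": 0, "most": 0}
for region, pair in rates.items():
    fewest = numbers_needed(DESIRED_COMPLETES, *pair["high"])  # best-case rates
    most = numbers_needed(DESIRED_COMPLETES, *pair["low"])     # worst-case rates
    totals["fewest"] += fewest
    totals["most"] += most
    print(f"{region}: {fewest} to {most} telephone numbers")

print(f"Entire survey: {totals['fewest']} to {totals['most']} telephone numbers")
```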
Case 10.2 Integrated Case
Global Motors
Nick Thomas, CEO of Global Motors, has agreed with Cory
Rogers of CMG Research to use an online survey to assess
consumer demand for new energy-efficient car models. In
particular, the decision has been made to purchase panel
access, meaning that the online survey will be completed
by individuals who have joined the ranks of the panel data
company and agreed to periodically answer surveys online.
While these individuals are compensated by their panel
companies, the companies claim that their panel members
are highly representative of the general population. Also,
because the panel members have provided extensive information about themselves such as demographics, lifestyles,
and product ownership, which is stored in the panel company data banks, a client can purchase this data without the
necessity of asking these questions on its survey.
Cory’s CMG Research team has done some investigation and has concluded that several panel companies can
provide a representative sample of American households.
Among these are Knowledge Networks, e-Rewards, and
Survey Sampling International, and their costs and services seem comparable: for a “blended” online survey of
about 25 questions, the cost is roughly $10 per complete
response. “Blended” means a combination of stored database information and answers to online survey questions.
Thus, the costs of these panel company services are based
on the number of respondents, and each company will bid
on the work based on the nature and size of the sample.
Cory knows his Global Motors client is operating under two constraints. First, ZEN Motors' top management
has agreed to a total cost for all of the research, and it is
up to Nick Thomas to spend this budget prudently. If a
large portion of the budget is expended on a single activity, such as paying for an online panel sample, there is
less available for other research activities. Second, Cory
Rogers knows from his extensive experience with clients that both Nick Thomas and ZEN Motors’ top management will expect this project to have a large sample
size. Of course, as a marketing researcher, Cory realizes
that large sample sizes are generally not required from a
sample error standpoint, but he must be prepared to respond to questions, reservations, or objections from Nick
or ZEN Motors managers when the sample size is proposed. As preparation for the possible need to convince
top management that CMG’s recommendation is the right
decision for the sample size for the Global Motors survey,
Cory decides to make a table that specifies sample error
and cost of the sample.
For each of the possible sample sizes listed below, calculate the associated expected cost of the panel sample and the sample error. (A computational sketch follows the list.)
1. 20,000
2. 10,000
3. 5,000
4. 2,500
5. 1,000
6. 500
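Here is a minimal Python sketch of the table Cory might build. It assumes the 95% level-of-confidence "sample error" formula presented earlier in the text, ±1.96 × √(pq/n) with the most conservative p = q = 50%, together with the panel companies' quoted cost of roughly $10 per complete; the variable names are illustrative.

```python
import math

COST_PER_COMPLETE = 10.00  # dollars, per the panel company quotes in the case
sample_sizes = [20000, 10000, 5000, 2500, 1000, 500]

for n in sample_sizes:
    # +/- sample error at 95% confidence, most conservative case (p = q = 50%)
    error_pct = 1.96 * math.sqrt(0.50 * 0.50 / n) * 100
    cost = n * COST_PER_COMPLETE
    print(f"n = {n:>6}: sample error = +/-{error_pct:.2f}%, cost = ${cost:,.0f}")
```

Note the diminishing return that Cory can point to: quadrupling n from 5,000 to 20,000 quadruples the cost but reduces sample error only from about ±1.39% to about ±0.69%.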
Chapter 11
Dealing with Field Work and Data Quality Issues

Learning Objectives
• To learn about total error and how nonsampling error is related to it
• To understand the sources of data collection errors and how to minimize them
• To learn about the various types of nonresponse error and how to calculate response rate to measure nonresponse error
• To become acquainted with data quality errors and how to handle them

"Where We Are"
1 Establish the need for marketing research.
2 Define the problem.
3 Establish research objectives.
4 Determine research design.
5 Identify information types and sources.
6 Determine methods of accessing data.
7 Design data collection forms.
8 Determine the sample plan and size.
9 Collect data.
10 Analyze data.
11 Prepare and present the final research report.

Dealing with Survey Data Quality

Data quality is a major concern for marketing researchers, and data quality issues vary by the method of data collection. We asked Steven H. Gittelman, President and CEO of Sample Source Auditors™, to provide comments on data quality.

[Photo caption: Steven H. Gittelman, President and CEO of Sample Source Auditors™. Text and images by permission, Steven H. Gittelman, Sample Source Auditors™.]

Telephone data collection requires process rigor. Interviewers not only need to be trained on the protocols inherent in data collection, they must be supervised at several levels. Monitors listen to a percentage of respondents to verify that interviewers are capturing an accurate rendition of the information they provide. Often monitors serve as part of the validation process in that they are hearing the survey conducted live and thus can offer witness to its accuracy. However, not all interviews can be monitored in real time, and many require a phone call to the respondent in an attempt to validate 17% (one in six completes) of each interviewer's nightly work. In the event some respondents cannot pass a four- or five-question validation questionnaire, all of the interviews collected by that interviewer should be called in an attempt to validate 100% of the work he or she performs.

Clearly the emphasis on telephone quality control can be mitigated by proper training. In those cases where an interviewer fails to pass monitoring or validation quality processes, he or she should be sent back to training for a refresher. Often it is good practice to have interviewers hear their own work as recorded in real time and, when possible, to listen to the work of others so that they can become better acclimated.

Online research is another matter. The safety net that exists in telephone research, due to the probabilistic properties of random digit dialed (RDD) samples, does not exist online. The absence of a reliable sample frame changes the protocol drastically, as does the difference in the medium. Online respondents are not supported by a live interviewer and thus cannot be corrected in real time by the presence of another human being. Instead, a large variety of tools is evolving to capture "satisficers" (those respondents who are poorly engaged and give little thought to their answers) in real time and also post hoc. The answers of people who are just trying to speed through a survey so that they can collect an incentive are not only meaningless but dangerous to the data analysis process. There is a growing body of evidence
that poorly engaged respondents do not enter random information but instead are directional in their responses. If this is so, then they are not only entering useless data but, because of their predilection to respond positively, they tend to bias the interpretation of the data at hand.
Some of the tools available to the online researcher identify speeding (either through
the entire survey or in sections), straight lining, failure at trap questions, inconsistencies, and
answers that are considered so rare as to be impossible. These tools may also facilitate a
general analysis of outliers. All these processes capture those who are poorly engaged but
fail to deal with the forces that drive respondents to become less engaged. The structure of questionnaires (too many grids, poor wording, excessive length, repetitiveness) and uninteresting subject material contribute to the loss of engagement in respondents. However, some
sources of respondents, such as respondents from social media, are less engaged than others
and generate different behavioral arrays in their responses.
To correct for the inconsistency of responses, some now advocate that the differences in behavior between sources must be corrected in online research. Various means of creating behaviorally representative samples are being tested. In some cases, blending of different sources to achieve a behavioral mix that represents the population is being tried. At this point the challenges in obtaining a behaviorally representative sample, at least one as good as having a probabilistic sample frame like RDD, have not been resolved.
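As a concrete illustration of two of the satisficer-detection tools Gittelman mentions, speed checks and straight-line detection, here is a minimal Python sketch. The threshold (one-third of the median completion time), the record layout, and the field names are our own assumptions, not an industry standard.

```python
from statistics import median

def flag_satisficers(responses, grid_items, speed_factor=0.33):
    """Flag respondents who sped through the survey or straight-lined a grid.

    responses: list of dicts with 'id', 'seconds', and answers to grid_items.
    """
    cutoff = median(r["seconds"] for r in responses) * speed_factor
    flagged = []
    for r in responses:
        speeding = r["seconds"] < cutoff  # far faster than the typical respondent
        answers = [r[item] for item in grid_items]
        straight_lining = len(set(answers)) == 1  # same answer to every grid item
        if speeding or straight_lining:
            flagged.append((r["id"], speeding, straight_lining))
    return flagged

# Hypothetical completion times (seconds) and 5-point grid answers
data = [
    {"id": 1, "seconds": 600, "q1": 4, "q2": 2, "q3": 5},
    {"id": 2, "seconds": 150, "q1": 3, "q2": 3, "q3": 3},  # fast and straight-lined
    {"id": 3, "seconds": 540, "q1": 5, "q2": 4, "q3": 4},
]
print(flag_satisficers(data, ["q1", "q2", "q3"]))  # [(2, True, True)]
```

In practice such flags feed a review step rather than automatic deletion, since a fast, uniform pattern can occasionally be a legitimate set of answers.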
This chapter deals with data collection issues, including factors that
affect the quality of data obtained
by surveys. There are two kinds of errors
in survey research. The first is sampling
error, which arises from the fact that
we have taken a sample. Those sources
of error were discussed in the previous
chapter. Error also arises from a respondent who does not listen carefully to the question or
from an interviewer who is almost burned out from listening to answering machines or having
prospective respondents hang up. This second type of error is called nonsampling error. This
chapter discusses the sources of nonsampling errors, along with suggestions on how marketing researchers can minimize the negative effect of each type of error. We also address how to
calculate the response rate to measure the amount of nonresponse error. We relate what a researcher looks for in preliminary questionnaire screening after the survey has been completed
to spot respondents whose answers may exhibit bias, such as always responding positively or
negatively to questions.
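Response rate calculation is developed in detail later in the chapter; as a preview, here is a minimal Python sketch of one common convention (completed interviews divided by eligible respondents contacted), which is an assumption on our part and may differ from the exact formula the chapter presents.

```python
# One widely used response rate convention: completions divided by all
# eligible contacts (completions + refusals + break-offs). Illustrative only.
def response_rate(completions, refusals, breakoffs):
    eligible_contacts = completions + refusals + breakoffs
    return completions / eligible_contacts

# Hypothetical tallies from a telephone survey's call records
rate = response_rate(completions=400, refusals=450, breakoffs=150)
print(f"Response rate: {rate:.1%}")  # 40.0%
```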
Data Collection and Nonsampling Error
Nonsampling error is defined as all errors in a survey except those due to the sample plan and sample size.

Data collection has the potential to greatly increase the amount of nonsampling error in a survey.
In the two previous chapters, you learned that the sample plan and sample size are important in predetermining the amount of sampling error you will experience. The significance of understanding sampling is that we can control sampling error.1 The counterpart
to sampling error is nonsampling error, which is defined as all errors in a survey except
those attributable to the sample plan and sample size. Nonsampling error includes (1) all
types of nonresponse error, (2) data gathering errors, (3) data handling errors, (4) data
analysis errors, and (5) interpretation errors. It also includes errors in problem definition
and question wording—everything, in fact, other than sampling error. Generally, there
is great potential for large nonsampling error to occur during the data collection stage,
so we discuss errors that can occur during this stage at some length. Data collection is the
phase of the marketing research process during which respondents provide their answers
or information to inquiries posed to them by the researcher. These inquiries may be direct
questions asked by a live, face-to-face interviewer; they may be posed over the telephone;
they may be administered by the respondent alone such as with an online survey; or they
may take some other form of solicitation the researcher has decided to use. Because nonsampling error cannot be measured by a formula as sampling error can, we describe the
various controls that can be imposed on the data collection process to minimize the effects
of nonsampling error.2
Possible Errors in Field Data Collection
Nonsampling errors are committed by fieldworkers and respondents.
A wide variety of nonsampling errors can occur during data collection. We divide these
errors into two general types and further specify errors within each general type. The first
general type is fieldworker error, defined as errors committed by the individuals who
administer questionnaires, typically interviewers.3 The quality of fieldworkers can vary
dramatically depending on the researcher’s resources and the circumstances of the survey, but it is important to keep in mind that fieldworker error can occur with professional
data collection workers as well as with do-it-yourselfers. Of course, the potential for
fieldworker error is less with professionals than with first-timers or part-timers. The other
general type is respondent error, which refers to errors on the part of the respondent.
These, of course, can occur regardless of the method of data collection, but some data
collection methods have greater potential for respondent error than others. Within each
general type, we identify two classes of error: intentional errors, or errors that are committed deliberately, and unintentional errors, or errors that occur without willful intent.4
Table 11.1 lists the various errors/types of errors described in this section under each of
the four headings. In the early sections of this chapter, we will describe these data collection errors, and, later, we will discuss the standard controls marketing researchers employ
to minimize these errors.
Table 11.1  Data Collection Errors Can Occur with Fieldworkers or Respondents

                 Intentional Errors        Unintentional Errors
Fieldworker      • Cheating                • Interviewer characteristics
                 • Leading respondents     • Misunderstandings
                                           • Fatigue
Respondent       • Falsehoods              • Misunderstanding
                 • Nonresponse             • Guessing
                                           • Attention loss
                                           • Distractions
                                           • Fatigue
Intentional Fieldworker Errors
Intentional fieldworker errors occur whenever a data collection person willfully violates the
data collection requirements set forth by the researcher. We describe two variations of intentional fieldworker errors: interviewer cheating and leading the respondent. Both are constant
concerns of all researchers.
Interviewer cheating occurs when the interviewer intentionally misrepresents respondents. You might think to yourself, “What would induce an interviewer to intentionally falsify
responses?” The cause is often found in the compensation system.5 Interviewers may work by
the hour, but a common compensation system is to reward them by completed interviews. That
is, a telephone interviewer or a mall-intercept interviewer may be paid at a rate of $7.50 per
completed interview, so at the end of an interview day, he or she simply turns in the “completed”
questionnaires (or data files, if the interviewer uses a laptop, tablet, or PDA system), and the
number is credited to the interviewer. Or the interviewers may cheat by interviewing someone
who is convenient instead of a person designated by the sampling plan. Again, the by-completed-interview compensation may provide the incentive for this type of cheating.6 At the same time,
most interviewers are not full-time employees,7 and their integrity may be diminished as a result.
You might ask, “Wouldn’t changing the compensation system for interviewers fix this
problem?” There is some defensible logic for a paid-by-completion compensation system.
Interviewers do not always work like production-line workers. With mall intercepts, for instance, there are periods of inactivity, depending on mall shopper flow and respondent qualification requirements. Telephone interviewers are often instructed to call only during a small
number of “prime time” hours in the evening, or they may be waiting for periods of time to
satisfy the number of call-backs policy for a particular survey. Also, as you may already know,
the compensation levels for fieldworkers are low, the hours are long, and the work is frustrating at times.8 As a result, the temptation to turn in bogus completed questionnaires is certainly
present, and some interviewers give in to this temptation. With marketing research in developing countries, interviewer cheating is especially troublesome, as you will learn when you
read Marketing Research Insight 11.1, which describes why interviewer cheating occurred in
a study conducted in Zimbabwe.
The second error that we are categorizing as intentional on the part of the interviewer is
leading the respondent, or attempting to influence the respondent’s answers through wording, voice inflection, or body language. In the worst case, the interviewer may actually reword
a question so that it is leading. For instance, consider the question, “Is conserving electricity
a concern for you?” An interviewer can influence the respondent by changing the question to
“Isn’t conserving electricity a concern for you?”
There are other, less obvious instances of leading the respondent. One way is to subtly signal the type of response that is expected. You may want to reread Marketing Research Insight 8.3 on page 219, which describes various types of leading questions.
Interviewer cheating is a concern, especially when compensation is on a per-completion basis.

Interviewers should not influence respondents' answers.
Marketing Research Insight 11.1
Global Application
Interviewer Cheating in Zimbabwe
Anyone performing marketing research in developing countries
soon realizes that the communication systems on which researchers in developed countries rely heavily are not usable.
In countries such as Zimbabwe, computer ownership is low, telephone systems are primitive, and even the postal system is undependable. Consequently, personal interviewers are often hired to
perform the data collection function. Researchers investigating
various aspects of entrepreneurs in Zimbabwe relied exclusively
on hired, personal interviewers to gather their data.9 They discovered that three out of the five interviewers turned in fabricated
interviews; thus, about 60% of the collected data was bogus.
The researchers were astonished at this occurrence because the
interviewers had been carefully selected, and they had undergone comprehensive training. Amazingly, one interviewer turned
in faked interviews even after being told that his predecessor had
been caught cheating and had been sent to jail!
The researchers reflected on the special circumstances of doing
marketing research in a developing country and came up with the
following explanations for interviewer cheating in this situation.
1. Cheating is normative. In an impoverished country such
as Zimbabwe, citizens take every opportunity to get along
or ahead, and being honest can hinder one’s short-run
opportunities. The cheating interviewers were just doing
“business as usual.”
2. Cheating is the fault of the researcher. Global researchers
in circumstances such as this are often aloof and culturally distant from the “hired-locally” interviewers, and the
interviewers may not be informed of the nature or importance of the research. They are just given a list of do’s and
don’ts without any supervision in the field, so they are
likely to “cut corners” with faked interviews and expenses.
3. There are monetary and psychological rewards to cheating. On the monetary side, the cheating interviewer is
paid for bogus interviews and given a travel allowance
that can be pocketed. On the psychological side, the
cheating interviewer feels that he or she has cleverly
tricked the foreign researchers, and he or she may even
boast about cheating.
What about the threat of being thrown in jail? When a
“good” interviewer was asked why other interviewers might
have cheated, he indicated that none of them were convinced that the first cheating interviewer was ever sent to jail.
Active Learning
What Type of Cheater Are You?
Students who read about the cheating error we have just described are sometimes skeptical
that such cheating goes on. However, if you are a “typical” college student, you probably
have cheated to some degree in your academic experience. Surprised? Take the following
test, and circle “Yes” or “No” under the “I have done this” heading for each statement.
Statement                                                                              I have done this.
Asking about the content of an exam from someone who has taken it                      Yes  No
Giving information about the content of an exam to someone who has not yet taken it    Yes  No
Before taking an exam, looking at a copy that was not supposed to be available
to students                                                                            Yes  No
Allowing another to see exam answers                                                   Yes  No
Copying off another's exam                                                             Yes  No
Turning in work done by someone else as one's own                                      Yes  No
Having information programmed into a calculator during an exam                         Yes  No
Using a false excuse to delay an exam or paper                                         Yes  No
Using exam crib notes                                                                  Yes  No
Passing answers during an exam                                                         Yes  No
Working with others on an individual project                                           Yes  No
Padding a bibliography                                                                 Yes  No
If you circled "Yes" for more than half of these practices, you are consistent with most business students who have answered a variation of this test.10 Now, if you and the majority of university students in general are cheating on examinations and assignments, don't you think that interviewers who may be in financially tight situations are tempted to cheat on their interviews?
For instance, if a respondent says "yes" in response to a question, the interviewer might say, "I thought you would say 'yes' as over 90% of my respondents have agreed on this issue." A comment such as this plants a seed in the respondent's head that he or she should continue to agree with the majority.
Another area of subtle leading occurs in interviewers’ cues. In personal interviews, for
instance, interviewers might ever so slightly shake their heads “no” to questions they disagree
with and nod “yes” to those they agree with while posing the question. Respondents may
perceive these cues and begin responding in the expected manner signaled by interviewers’
nonverbal cues. Over the telephone, interviewers might give verbal cues such as "uh-huh" to responses they disagree with or "okay" to responses they agree with, and this continued
reaction pattern may subtly influence respondents’ answers. Again, we have categorized this
example as an intentional error because professional interviewers are trained to avoid them,
and if they commit them, they should be aware of their violations.
Unintentional Fieldworker Errors

Unintentional interviewer errors include misunderstandings and fatigue.

An unintentional interviewer error occurs whenever an interviewer commits an error while believing that he or she is performing correctly.11 There are three general sources of unintentional interviewer errors: interviewer personal characteristics, interviewer misunderstandings, and interviewer fatigue. Unintentional interviewer error is found in the interviewer's personal characteristics such as accent, sex, and demeanor. It has been shown that under some circumstances, the interviewer's voice,12 gender,13 or lack of experience14 can be a source of bias. In fact, just the presence of an interviewer, regardless of personal characteristics, may be a source of bias.
Interviewer misunderstanding occurs when an interviewer believes he
or she knows how to administer a survey but instead does it incorrectly. As
we have described, a questionnaire may include various types of instructions
for the interviewer, a variety of response scale types, directions on how to
record responses, and other complicated guidelines to which the interviewer
must adhere. As you can guess, there is often a considerable education gap between marketing researchers who design questionnaires and interviewers who
administer them. This gap can easily become a communication problem in
which the instructions on the questionnaire are confusing to the interviewer.
Interviewer experience cannot overcome poor questionnaire instructions.15
When instructions are hard to understand, the interviewer will usually struggle to comply with the researcher’s wishes but may fail to do so.16
[Photo caption: Personal characteristics such as appearance, dress, or accent, although unintentional, may cause fieldworker errors.]

The third type of unintentional interviewer error pertains to fatigue-related mistakes, which can occur when an interviewer becomes tired. You may be surprised that fatigue can enter into asking questions and recording answers, because these tasks are not physically demanding, but interviewing is labor-intensive17 and can become tedious and monotonous. It is repetitious at best, and it is especially demanding when respondents are uncooperative. Toward the end of a long interviewing day, the interviewer may be less mentally alert than earlier in the day, and this condition can cause slip-ups and
mistakes to occur. The interviewer may fail to obey a skip pattern, might forget to make note
of the respondent’s reply to a question, might hurry through a section of the questionnaire, or
might appear or sound weary to a potential respondent who refuses to take part in the survey
as a result.
Sometimes respondents do not tell the truth.

Nonresponse is defined as failure on the part of a prospective respondent to take part in a survey or to answer a question.

To learn about nonresponse, launch www.youtube.com and search for "Nonresponse - AAPOR 2008: Robert Groves."

Unintentional respondent errors may result from misunderstanding, guessing, attention loss, distractions, and fatigue.

Sometimes a respondent will answer without understanding the question.

Whenever a respondent guesses, error is likely.
Intentional Respondent Errors
Intentional respondent errors occur when respondents willfully misrepresent themselves
in surveys. There are at least two major intentional respondent errors that require discussion: falsehoods and nonresponse. Falsehoods occur when respondents fail to tell the truth in surveys.
They may feel embarrassed, they might want to protect their privacy, or they may even suspect
that the interviewer has a hidden agenda such as turning the interview into a sales pitch.18 Certain topics present greater potential for misrepresentation. For instance, personal income level is
a sensitive topic for many people, marital status disclosure is a concern for women living alone,
age is a delicate topic for some, and personal hygiene questions may offend some respondents.
Alternatively, respondents may become bored, deem the interview process burdensome, or
find the interviewer irritating. For a variety of reasons, they may want to end the interview in
a hurry. Falsehoods may be motivated by a desire on the part of the respondent to deceive, or
they may be mindless responses uttered just to complete the interview as quickly as possible.
The second type of intentional respondent error is nonresponse, which we have referred
to at various times in this textbook. Nonresponse includes a failure on the part of a prospective respondent to take part in the survey, premature termination of the interview, or refusals to
answer specific questions on the questionnaire. In fact, nonresponse of various types is probably the most common intentional respondent error that researchers encounter. Some observers believe that survey research is facing tough times ahead because of a growing distaste for
survey participation, increasingly busy schedules, and a desire for privacy.19 By one estimate,
the refusal rate of U.S. consumers is almost 50%.20 Telephone surveyors are most concerned.21
While most agree that declining cooperation rates present a major threat to the industry,22
some believe the problem is not as severe as many think.23 Nonresponse in general, and refusals in particular, are encountered in virtually every survey conducted. Business-to-business
(B2B) marketing research is even more challenging, presenting additional hurdles that must
be cleared (such as negotiating “gatekeepers”) just to find the right person to take part in the
survey. We devote an entire section to nonresponse error later in this chapter.
Unintentional Respondent Errors
An unintentional respondent error occurs whenever a respondent gives a response that is
not valid, but that he or she believes is the truth. There are five instances of unintentional
respondent errors: misunderstanding, guessing, attention loss, distractions, and fatigue. First,
respondent misunderstanding is defined as situations in which a respondent gives an answer without comprehending the question and/or the accompanying instructions. Potential respondent misunderstandings exist in all surveys. Such misunderstandings range from simple
errors, such as checking two responses to a question when only one is called for, to complex
errors, such as misunderstanding terminology.24 For example, a respondent may think in terms
of net income for the past year rather than income before taxes as desired by the researcher.
Any number of misunderstandings such as these can plague a survey.
A second form of unintentional respondent error is guessing, in which a respondent gives
an answer when he or she is uncertain of its accuracy. Occasionally, respondents are asked
about topics about which they have little knowledge or recall, but they feel compelled to
provide an answer to the questions being posed. Respondents might guess the answer, and
all guesses are likely to contain errors. Here is an example of guessing: If you were asked to
estimate the amount of electricity in kilowatt hours you used last month, how many would
you say you used?
A third unintentional respondent error occurs when a respondent’s interest
in the survey wanes, known as attention loss. The typical respondent is not as
excited about the survey as is the researcher, and some respondents will find
themselves less and less motivated to take part in the survey as they work their
way through the questionnaire. With attention loss, respondents do not attend
carefully to questions, they issue superficial and perhaps mindless answers, and
they may refuse to continue taking part in the survey.
Fourth, distractions, such as interruptions, may occur while the questionnaire administration takes place. For example, during a mall-intercept interview,
a respondent may be distracted when an acquaintance walks by and says hello.
A parent answering questions on the telephone might have to attend to a fussy
toddler, or an online survey respondent might be prompted that an email message
has just arrived. A distraction may cause the respondent to get “off track” or otherwise not take the survey as seriously as is desired by the researcher.
Fifth, unintentional respondent error can take the form of respondent fatigue, in which the respondent becomes tired of participating in the survey.
Whenever a respondent tires of a survey, deliberation and reflection will diminish. Exasperation will mount and cooperation will decrease. The respondent
might even opt for the “no opinion” response category just as a means of quickly
finishing the survey because he or she has grown tired of answering questions.
[Photo caption: Guesses are a form of unintentional respondent error.]
Active Learning
What Type of Error Is It?
It is sometimes confusing to students when they first read about intentional and unintentional
errors and the attribution of errors to interviewers or respondents. To help you learn and remember these various types of data collection errors, see if you can correctly identify the type
for each of the following data collection situations. Place an “X” in the cell that corresponds to
the type of error that pertains to the situation.
                                                                  Interviewer Error              Respondent Error
Situation                                                         Intentional  Unintentional     Intentional  Unintentional
A respondent says "No opinion" to every question asked.
When a mall-intercept interviewer is suffering from a bad
cold, few people want to take the survey.
Because a telephone respondent has an incoming call, he asks
his wife to take the phone and answer the rest of the
interviewer's questions.
A respondent grumbles about doing the survey, so an
interviewer decides to skip asking the demographic questions.
A respondent who lost her job gives her last year's income
level rather than the much lower one she will earn for this year.
SPSS Student Assistant: Red Lobster: Recoding and Computing Variables
Field Data Collection Quality Controls
Precautions and procedures can be implemented to minimize the effects of the various types
of errors just described. Please note that we said “minimize” and not “eliminate,” as the potential for error always exists. However, by instituting the following controls, a researcher can
be assured that the nonsampling error factor involved with data collection will be diminished.
The field data collection quality controls we describe are listed in Table 11.2.
Table 11.2  How to Control Data-Collection Errors

Error Types                                Control Mechanisms
Intentional fieldworker errors
  Cheating, Leading respondent             Supervision; Validation
Unintentional fieldworker errors
  Interviewer characteristics              Selection and training of interviewers
  Misunderstandings                        Orientation sessions and role playing
  Fatigue                                  Require breaks and alternative surveys
Intentional respondent errors
  Falsehoods, Nonresponse                  Ensuring anonymity and confidentiality; Incentives; Validation checks; Third-person technique
Unintentional respondent errors
  Misunderstandings                        Well-drafted questionnaire; Direct questions
  Guessing                                 Well-drafted questionnaire; Response options, e.g., "unsure"
  Attention loss, Distractions, Fatigue    Reversal of scale endpoints; Prompters
Intentional fieldworker error can be controlled with supervision and validation procedures.
Control of Intentional Fieldworker Error
Two general strategies—supervision and validation—can be employed to guard against cases
in which the interviewer might intentionally commit an error.25 Supervision uses administrators to oversee the work of field data collection workers.26 Most centralized telephone interviewing companies have a “listening in” capability that the supervisor can use to tap into and
monitor any interviewer’s line during an interview. (At this point you may want to reread the
comments made by Steve Gittelman in the opening vignette to this chapter.) Even though they
have been told that the interview “may be monitored for quality control,” the respondent and
the interviewer may be unaware of the monitoring, so the “listening in” samples a representative interview performed by that interviewer. The monitoring may be performed on a recording of the interview rather than in real time. If the interviewer is leading or unduly influencing
respondents, this procedure will spot the violation, and the supervisor can take corrective
action such as reprimanding that interviewer. With personal interviews, the supervisor might
accompany an interviewer to observe that interviewer while administering a questionnaire in
the field. Because “listening in” without the consent of the respondent could be considered a