_________________________________________________________________________________
Title: Resource Manual for Customer Surveys Part 7
Author: OMB
Date: October 1, 1993
_________________________________________________________________________________
Appendix B -- Selected Technical Notes
B.1 Introduction
No general recommendations are given in this Appendix, except on
focus groups, and then only because of the PRA process. One reason
is that most of the considerations discussed here have important
dimensions that depend on your unique circumstances. For example--
- Your existing knowledge of your customers.
- The nature of your product/service delivery.
- How quickly you can modify your agency's approach to
customers.
- The degree of satisfaction or dissatisfaction with what is
currently being done.
This list could be much longer, as you will find in your own
experience.
While in the end your own thoughtful practice will become your
principal guide, some beginning observations may help in
identifying what to look for and where. A few of these are
provided here with others to be added in later versions of this
manual. In particular, private sector and existing government
applications of customer surveys suggest that special attention
should be paid to --
- How you use focus groups and similar methods (B.2),
- Analysis issues in the use of opinion scales (B.3),
- Complete coverage of your targeted customers (B.4),
- Some other issues which supplement Section 4 (B.5).
One of the best ways you may find to deal with these and other
issues that may be unique to your agency is to set aside
resources that will allow you to continuously improve the
measurement process.
B.2 Focus Groups and the PRA
One of the first tasks in developing useful customer surveys is
to determine what the customer perceives to be the important
service or product attributes. Focus groups can be a valuable
tool for eliciting this customer perspective and are widely used
for this purpose in both government and the private sector.
Focus groups fall within the coverage of the Paperwork Reduction
Act and require OMB clearance, but a program of focus groups is a
prime candidate for the "generic" clearances described in Section 5.
Focus groups require planning, effort, and resources, just like
any other research method. They involve a recruitment process, a
"script" comprising the questions to be addressed by the group,
and one or more "moderators" to facilitate the participation of
all members of the group and keep the responses focused on the
target issues. The information collected in a focus group
includes a verbatim record of the discussion (often a video tape)
and may also include comments and analysis. Both the recruitment
and analysis stages are generally time-consuming efforts.
Focus groups sponsored by Federal agencies often involve more
highly selective recruitment than is the norm for private sector
focus groups. Vendors of such services should be advised to
allow for higher recruitment costs (more telephone contacts per
successful recruitment) when this is the case.
Since the success of focus groups depends on full and willing
participation, vendors should be discouraged from using more
aggressive recruitment practices (e.g., "hard sell" or special
incentives) to bring in marginal participants. While OMB rules
restrict the use of cash incentives generally, payments of up to
$25 per participant have been routinely approved as an allowance
for the estimated "out-of-pocket" costs (transportation, child
care, etc.) of a focus group.
Well-trained moderators contribute substantially to the value of
a focus group. The "script" or moderator's guide should clearly
lay out the questions (and follow-up issues) to be addressed by
the group. A clear statement of the purposes of the focus
group(s) is also needed to guide the moderator and for the PRA
review process.
In some circumstances formal focus groups may not be needed. For
clients who are both sophisticated and articulate, less formal
approaches (a less structured meeting or group discussion) may
serve very well to elicit customer opinions. These informal
opportunities for customer input require the same attention to
issues and careful selection of participants as are needed for
focus groups, but they do not require PRA review.
Other techniques that have been used successfully to explore
customer perceptions are analysis of customer complaints,
"suggestion" boxes, and "mystery shopper" studies (where a person
poses as a client to observe how a service is delivered). The
"mystery shopper" approach was used to identify problems in the
IRS taxpayer assistance program. These other methods also do not
require PRA review and may offer quick and cost-effective options
for identifying customer problems in some cases.
While focus groups can be very useful for exploring, specifying,
and understanding customer concerns, they are not useful for
generalization (e.g., quantitative measurements or comparisons).
Focus groups are almost never representative of the entire
customer base.
B.3 Opinion Scales
One of the important tasks in quantifying satisfaction or
dissatisfaction is selecting the measurement scale(s) you will
use. This is a very large subject, but an illustration of some of
the issues may be helpful.
A common strategy, for example, is to present questions about
important aspects of service with Likert scales for responses
(e.g., from "very satisfied" to "very dissatisfied" in some number
of steps). When numbers are assigned to such scales, the means and
variances computed from them do not necessarily have the same
properties as those computed on continuous variables such as age
or income. (One solution might be to use categorical data analysis
techniques.)
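To illustrate one such approach (the response codes and counts
below are invented for the example, and the summary is only a
sketch), a Likert item can be described by the proportion of
responses in each category rather than by an arithmetic mean:

    # Hypothetical illustration: summarizing a 5-point Likert item by
    # category proportions rather than by an arithmetic mean.
    from collections import Counter

    # Assumed responses coded 1 = "very dissatisfied" ... 5 = "very satisfied"
    responses = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4, 2, 5]

    counts = Counter(responses)
    n = len(responses)
    for category in range(1, 6):
        share = counts.get(category, 0) / n
        print(f"Category {category}: {share:.0%}")

Reporting category shares in this way preserves the ordinal
character of the scale and avoids attributing interval properties
to the numeric codes.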
The use of ordinal scales has other implications as well.
For example, satisfaction is often viewed as a difference
between expectations and perceived performance, but the use of
an arithmetic difference between two opinion scores as a
measure of satisfaction has raised both conceptual and
statistical controversies in the literature.
Such issues suggest how measurement decisions and analytical
objectives are intertwined. Empirical methods and approaches
have provided means to address problems that cannot be resolved
within classical (mathematical) measurement theory.
In the example above, empirical results indicate that
responses couched in terms of satisfaction may already be
referenced to some implicit concept of expectations held by
the respondent. This, in turn, suggests taking advantage of this
natural tendency by providing an explicit reference point (e.g.,
"service that fully meets your needs").
Empirical results abound in the literature concerning opinion
scales and some scale properties are almost always estimated
empirically. Clearly, your judgement, and not some mechanical
manipulation of "the numbers," will be crucial. The bibliography
may help. Consultants are available (see Section 6). Workshops
are also planned (see Section 7).
B.4 Representativeness of the Customer Survey
Representativeness requires good frame construction and careful,
probability-based sample selection. A good response rate is also
crucial.
The risk of significant nonresponse bias (that the answers of
nonrespondents would have differed from those of respondents)
increases with the amount of nonresponse. Indeed, the technical
consequences of inadequate response, when superimposed on the
other difficulties inherent in customer surveys, can seriously
undermine the usefulness of your analyses.
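As a rough sketch of why this risk grows with the amount of
nonresponse (all figures below are assumed for the example), the
bias in an estimated mean is approximately the nonresponse rate
multiplied by the difference between respondent and nonrespondent
means:

    # Hypothetical illustration: approximate nonresponse bias in an estimated mean.
    # bias ~= (1 - response_rate) * (respondent_mean - nonrespondent_mean)
    response_rate = 0.60        # assumed 60 percent response
    respondent_mean = 4.1       # assumed mean satisfaction among respondents
    nonrespondent_mean = 3.4    # assumed (and in practice unknown) mean among nonrespondents

    bias = (1 - response_rate) * (respondent_mean - nonrespondent_mean)
    print(f"Approximate bias in the estimated mean: {bias:+.2f}")

Because the nonrespondent mean is unknown in practice, the most
reliable protection is to keep nonresponse low in the first place.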
The issues here have both a philosophical and a technical
dimension. Chapter 2 ("Putting Customers First") of the Report
of the National Performance Review states the philosophical
principle:
"We will ensure that all customers have a voice, and that
every voice is heard."
This principle calls for survey designs that make participation
convenient, simple, and free of unnecessary burden or perceived
threats -- in short, the kind of designs that generally produce
high response rates. One caution, however: some techniques
(e.g., incentives and aggressive follow-up) have been shown to
affect respondents' attitudes toward the sponsoring agency (see
the bibliography).
In surveying customer opinions, we are often less interested in
where the mean of the distribution lies than in determining
accurately what portion of the customer population is highly
satisfied or highly dissatisfied (and why). Nonresponse can be
particularly damaging in interpreting such surveys. For example,
it is not uncommon that the respondents may be either more or
less homogeneous than the full population -- both phenomena have
been observed in different settings. What will happen in your
situation cannot be predicted; however, you should be on the
lookout for this kind of distortion of the distribution.
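To illustrate this focus on the tails of the distribution (the
counts below are invented, and a simple random sample is assumed),
the shares of highly satisfied and highly dissatisfied customers
can be estimated as simple proportions with approximate margins of
error:

    # Hypothetical illustration: estimating the share of "highly satisfied" and
    # "highly dissatisfied" customers with a normal-approximation interval.
    # Assumes a simple random sample; all counts are invented for the example.
    import math

    n = 400                      # assumed number of respondents
    counts = {"highly satisfied": 180, "highly dissatisfied": 36}

    for label, count in counts.items():
        p = count / n
        margin = 1.96 * math.sqrt(p * (1 - p) / n)   # approximate 95% margin of error
        print(f"{label}: {p:.1%} +/- {margin:.1%}")

Nonresponse concentrated among either the most satisfied or the
most dissatisfied customers would distort exactly these tail
estimates.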
B.5 Some Other Issues
Customer satisfaction needs to be thought of as multi-
dimensional. For example, much of the usefulness of a survey of
customers may come from the specificity of the insights provided
on agency practice. As already noted in Section 4, on a technical
level, multiple measures should help in reducing the inherently
greater uncertainties in opinion data.
Whatever choices you make, the scales used for opinion research
may set some practical bounds on the precision that can be
achieved and the types of analysis that can be used. Indeed,
opinion results may perform poorly when pressed to measure small
differences or relationships that are not strong. An
understanding of these problems will help avoid overambitious
analytical goals.
Section 3 outlined a step-by-step plan for a single survey design
cycle. It is important to remember that a customer survey is not
an activity to be done just once or once in a while. Much of the
benefit derived from systematically obtaining customer views
depends on a commitment to conduct measurements as frequently as
necessary:
- Qualitative components of the process should be
repeated periodically to stay in touch with changing
customer perceptions and expectations. This will suggest
revisions to quantitative surveys as well.
- The quantitative surveys themselves need to be repeated
regularly, too, with or without changes in the questions
being used.
Since comparisons over time are an important quantitative
objective of the process, your studies (both qualitative and
quantitative) should include some overlap of new and old designs,
in order to calibrate improved measures against prior measures.
This is a common practice when significant changes are made to
major Federal statistical series, and it is even more important
here -- especially in the early going, when you may still be
learning the best ways for your agency to conduct and use
customer surveys.
Ironically, success in increasing customer satisfaction may lead
to a greater difficulty in measuring that satisfaction.
Technically, the shape (e.g., skewness) of the distribution may
change as satisfaction grows, and the relationships (e.g., degree
of collinearity) among the multiple satisfaction measures may also
change as they all tend to rise together. As your success
increases, a carefully designed change in your measurement scheme
can open up new information and rejuvenate the analytical process.
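As an illustrative sketch of the kind of monitoring this implies
(all scores below are invented), both the skewness of a single
measure and the correlation among multiple measures can be tracked
from one survey round to the next:

    # Hypothetical illustration: tracking distribution shape (skewness) and the
    # correlation among satisfaction measures across survey rounds.
    # All scores are invented; statistics.correlation requires Python 3.10+.
    import statistics

    def skewness(values):
        # Simple moment-based skewness estimate
        m = statistics.mean(values)
        s = statistics.pstdev(values)
        n = len(values)
        return sum((x - m) ** 3 for x in values) / (n * s ** 3)

    round_1 = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # earlier round (1-5 scale)
    round_2 = [4, 5, 4, 5, 5, 4, 5, 4, 5, 5]   # later round, satisfaction has grown

    print("Skewness, round 1:", round(skewness(round_1), 2))
    print("Skewness, round 2:", round(skewness(round_2), 2))

    # Correlation between two satisfaction measures in the later round
    measure_a = [4, 5, 4, 5, 5, 4, 5, 4, 5, 5]
    measure_b = [4, 5, 5, 5, 4, 4, 5, 4, 5, 5]
    print("Correlation between measures:",
          round(statistics.correlation(measure_a, measure_b), 2))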
Appendix C -- Case Studies
(to be supplied as available)
Appendix D -- Contracting for Surveys
Statistical Policy Working Paper #9 (supplied separately)
should be inserted after this page.
NOTE: This document is not available electronically at
this time -- it may be released in this form at
a later date.
UPDATES and SUGGESTIONS

If you wish to receive updates and supplements to this manual,
please provide your name and address. Please include the name of
your Agency/Department as appropriate to facilitate bulk
distributions. Feel free to add comments here or send them
separately.

Send this information to:

     Statistical Policy Office
     OMB/OIRA
     Room 3228, NEOB
     Washington, D.C. 20503
     or FAX (202) 395-7245

NAME    ____________________________

ADDRESS _________________________
        _________________________
        _________________________

AGENCY  __________________________