Title: Resource Manual for Customer Surveys Part 4

Author: OMB

Date: October 1993

Status: RO

4. SOME FURTHER CONSIDERATIONS

Most of the 12 steps described in Section 3 are relevant to all
surveys, and if you have conducted surveys on other topics, they
will be familiar to you. In conducting customer surveys, however,
you may encounter some issues that you have not faced before, even
if you have conducted many surveys.

This section contains short descriptions of these problems, in an

attempt to forewarn you of potential difficulties as you progress

through the survey process. Because of the need to prepare this
manual quickly, there has been time to discuss only a few of these
issues, and only briefly (see Appendix). Future updates of the manual

will have additional appendices discussing these and other issues

in greater depth.

Repeated Surveys of Similar Design Facilitate Measures of Change

in Levels of Satisfaction

The real leap in information about your agency's relationships
with its customers comes from seeing how customer satisfaction
changes over time in response to management decisions.
Measuring change in satisfaction levels over time requires sample
designs that are compatible at both times, comparable questionnaire
measures, similar levels of participation in the survey, and
statistical analyses that directly measure change in the same
statistics. Each of these issues presents knotty problems on first
encounter. Many of the problems

involve finding techniques that allow you to improve the quality

of the second survey by learning from your mistakes in the first,

while retaining design features that make the two surveys

comparable. Researchers who are experienced in using surveys to

measure change can help you with these issues.
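
To make the idea concrete, the following is a minimal sketch, not
part of the manual's procedures, of how change between two
comparably designed survey waves might be summarized. The ratings,
wave sizes, and "satisfied" categories are hypothetical, and the
normal-theory standard error assumes independent simple random
samples in both waves; a survey statistician would adapt the
estimator to the actual sample design.

    # A minimal sketch (not from the manual): estimating change in the
    # share of satisfied customers between two comparably designed waves.
    # Ratings, wave sizes, and the "satisfied" categories are hypothetical.
    import math

    def satisfied_share(responses, satisfied_codes=frozenset({4, 5})):
        # Share of respondents choosing a "satisfied" category on a
        # 5-point item (codes 4 and 5 here, by assumption).
        return sum(r in satisfied_codes for r in responses) / len(responses)

    def change_estimate(wave1, wave2):
        # Difference in satisfied shares, with a normal-theory standard
        # error that assumes independent simple random samples.
        p1, p2 = satisfied_share(wave1), satisfied_share(wave2)
        se = math.sqrt(p1 * (1 - p1) / len(wave1) + p2 * (1 - p2) / len(wave2))
        return p2 - p1, se

    # Hypothetical 5-point ratings from two annual waves.
    wave1 = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3]
    wave2 = [5, 5, 4, 4, 5, 3, 4, 5, 5, 4]
    diff, se = change_estimate(wave1, wave2)
    print(f"Change in satisfied share: {diff:+.2f} (SE {se:.2f})")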

Building Customer Lists is Difficult for Some Agencies

Agencies that don't have direct contact with their customers may

have difficulty even identifying their customers. They may find

that because there are intermediaries between the agency and the

ultimate customers, the agency has no natural way of contacting

them. This is both a conceptual problem and a logistical

problem. You have to decide how important the customers who are
difficult to enumerate are to the agency. If you include them in
the target population, you may have to develop unusual ways to
measure their satisfaction. Here, survey methodologists with long
experience in surveying populations that lack conveniently
available sampling frames can be helpful.

Using Multiple Questions to Measure Satisfaction is Important

Because a customer's satisfaction cannot be observed objectively
but is an internalized state containing several components, most
private sector customer surveys ask the customer many different
questions, all of which are viewed as slightly different
indicators of the same overall concept of satisfaction.

This "multiple indicator" approach has been found to improve the

reliability of satisfaction levels measured by surveys. Indeed,

most models of measurement suggest that the more questions used

to measure satisfaction, the more stable or reliable the results

will be. Two problems arise from a multiple indicator approach:
first, how many different questions should be used to measure
satisfaction (with each added question, the length of the
questionnaire increases); and second, how do you statistically
combine the various questions into a useful measure of satisfaction?

Statisticians, especially those experienced in attitudinal

surveys, scale construction, and multivariate modeling with

multiple indicators, can be helpful in guiding decisions on these

points.
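
As an illustration of the kind of scale construction involved,
here is a minimal sketch of combining several items into a
composite score and checking internal consistency with Cronbach's
alpha. The items, ratings, and respondent counts are hypothetical;
a real analysis would use far more respondents.

    # A minimal sketch: a composite satisfaction score plus Cronbach's
    # alpha for internal-consistency reliability. Items and data are
    # hypothetical.
    from statistics import mean, pvariance

    def composite_score(item_ratings):
        # Simple composite: the mean of one respondent's item ratings.
        return mean(item_ratings)

    def cronbach_alpha(data):
        # data: one list of k item ratings per respondent.
        k = len(data[0])
        item_vars = [pvariance([resp[i] for resp in data]) for i in range(k)]
        total_var = pvariance([sum(resp) for resp in data])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Four respondents rating three items (timeliness, courtesy, accuracy).
    data = [[5, 4, 5], [3, 3, 4], [4, 4, 4], [2, 3, 2]]
    print("Composite scores:", [composite_score(resp) for resp in data])
    print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")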

Respondents Tend to Answer Positively to

Customer Satisfaction Questions

Private sector customer surveys often find that customers tend to

overstate their levels of satisfaction. This produces skewed

distributions for satisfaction measures, with the vast majority

of respondents giving positive ratings. Analyses of skewed
variables cannot rely on traditional normal-distribution theory
for statistical inference to the full customer population. You
need to use techniques that are sensitive to these distributional
issues.
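
One distribution-sensitive option, offered here only as a minimal
sketch with hypothetical ratings, is to summarize a skewed measure
with a "top-box" share and attach a percentile-bootstrap confidence
interval rather than a normal-theory interval:

    # A minimal sketch: a percentile-bootstrap confidence interval for a
    # "top-box" satisfaction share. Ratings are hypothetical 0-10 scores
    # piled up at the high end, as customer surveys often find.
    import random

    random.seed(1)  # reproducible for the example
    ratings = [10, 9, 10, 9, 8, 10, 9, 10, 7, 10, 9, 8, 10, 4, 9, 10]

    def top_box(xs):
        # Share of respondents rating 9 or 10 (an assumed cutoff).
        return sum(x >= 9 for x in xs) / len(xs)

    def bootstrap_ci(data, stat, n_boot=10000, level=0.95):
        # Percentile bootstrap: resample with replacement and take the
        # middle 95% of the resampled statistics.
        boots = sorted(
            stat([random.choice(data) for _ in data]) for _ in range(n_boot)
        )
        lo = boots[int((1 - level) / 2 * n_boot)]
        hi = boots[int((1 + level) / 2 * n_boot) - 1]
        return lo, hi

    print("Top-box share:", top_box(ratings))
    print("95% bootstrap CI:", bootstrap_ci(ratings, top_box))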

The tendency to overreport satisfaction also produces a problem

for the measurement of change in satisfaction levels over time.

Because of the tendency to overestimate positive sentiments, it

becomes increasingly difficult for levels of customer

satisfaction to show large increases over time. A law of

diminishing returns affects estimates of change. Private sector
researchers have found that this issue is ameliorated somewhat by
using many different satisfaction questions that vary in their
tendency to draw very high ratings (the multiple indicator
approach mentioned above).

Related to the tendency to answer positively is the problem of a
"halo" effect, whereby customers answer each individual question
by giving their overall impression of the agency rather than
assessing the particular attribute of the product or service that
the question measures. This produces inflated correlations among
the different satisfaction items, which decreases the value of any
one measure.
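
A simple diagnostic, sketched below with hypothetical items and
ratings, is to inspect the inter-item correlations: uniformly
near-perfect correlations across supposedly distinct attributes
are consistent with a halo effect. (This sketch uses
statistics.correlation, available in Python 3.10 and later.)

    # A minimal sketch: inspecting inter-item correlations for a "halo"
    # pattern. Item names and ratings are hypothetical.
    from statistics import correlation  # Python 3.10+

    data = {
        "timeliness": [5, 3, 4, 2, 5, 4],
        "courtesy":   [5, 3, 4, 2, 4, 4],
        "accuracy":   [5, 3, 5, 2, 5, 4],
    }

    items = list(data)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            # Correlations near 1.0 on every pair suggest respondents are
            # rating the agency overall, not each attribute separately.
            print(f"r({a}, {b}) = {correlation(data[a], data[b]):.2f}")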

There are Many Choices for Response Scales in Satisfaction

Questions

Some customer surveys ask the respondents to report their

satisfaction on a 5-point scale with each point labelled (e.g.,
"Not at all satisfied" to "Very satisfied"). Others ask the
respondent to use a scale from 0 to 10, with 0 meaning
"Completely dissatisfied" and 10 "Completely satisfied." Others
use 7-point scales or 100-point scales. The two decisions, on

the number of scale points to use and how to label the points,

affect the answers the respondents give and the subsequent

overall measures of satisfaction.

There is a large literature on how to construct attitude scales
that can alert you to ways of avoiding unintended biasing of
responses (see Bibliography). No matter what response scale is
chosen, however, you cannot safely use the answers from a single
question on, say, a 7-point scale as if they were a simple count
of some uniform "satisfaction unit." The mean rating on a single
7-point scale, for example, is likely to have very low reliability
(another reason to use the techniques of scale construction and
multiple indicator analysis mentioned above).
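
The standard Spearman-Brown formula from the scale-construction
literature quantifies this point: the predicted reliability of a
composite grows with the number of parallel items. The sketch
below assumes an illustrative single-item reliability of 0.45, a
value chosen only for the example.

    # A minimal sketch: the Spearman-Brown formula for the reliability
    # of a composite of k parallel items. The single-item reliability
    # of 0.45 is an assumed, illustrative value.

    def spearman_brown(single_item_reliability, k):
        r = single_item_reliability
        return k * r / (1 + (k - 1) * r)

    for k in (1, 3, 5, 10):
        print(f"{k:>2} item(s) -> predicted reliability "
              f"{spearman_brown(0.45, k):.2f}")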

High Response Rates for Customer Surveys Improve the

Credibility and Usefulness of Results

When customers sampled in a satisfaction survey do not
participate, the results are threatened. This is particularly
troublesome when nonresponse is higher among certain types of
customers (e.g., if those who are mildly positive toward the
agency choose not to respond but those who are very dissatisfied
or very satisfied do respond). The results of

the survey may give a very distorted picture of agency

performance among its current customers. No survey achieves a
100% participation rate, but it is important to make efforts to
assure that customers in different important groups (e.g., by
frequency of contact with the agency, by demographic
characteristics, by type of product/service used) participate at
the same rate.

In doing this, use non-threatening follow-up and
"respondent-friendly" solicitation of participation, both to
maintain ongoing relationships with valued customers and to avoid
distorting answers to the satisfaction questions.
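
As a minimal sketch of what monitoring participation by group
might look like, the following computes response rates for
hypothetical customer groups and the simple weighting-class
adjustments often used when rates differ. The group names and
counts are invented for illustration.

    # A minimal sketch: response rates by customer group and the simple
    # weighting-class adjustments often used when rates differ. Group
    # names and counts are hypothetical.

    sampled   = {"frequent": 400, "occasional": 400, "one-time": 200}
    responded = {"frequent": 320, "occasional": 240, "one-time":  80}

    for group, n_sampled in sampled.items():
        rate = responded[group] / n_sampled
        weight = 1 / rate  # inflates groups that responded at lower rates
        print(f"{group:>10}: response rate {rate:.0%}, "
              f"adjustment weight {weight:.2f}")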

New Developments in Customer Surveys Are Occurring Rapidly

Methods to craft satisfaction measurements, conduct the surveys,
and analyze the results are undergoing rapid development as the
private and academic sectors learn how to improve techniques.

Continuous improvements in your agency's customer surveys will

require your staff to keep up with these developments. Updates of
this manual will highlight those developments, and training
opportunities on survey techniques can instruct your staff in how
to implement new methodologies.

5. EFFICIENTLY MANAGING THE

REVIEW OF SURVEY PLANS

5.1 Public and Private Sector Surveys

The primary differences between government surveys and private

sector surveys are the standards and the oversight process

imposed by the Paperwork Reduction Act (PRA). In the private

sector, decisions concerning surveys are made by individual
entrepreneurs, and any specific discipline imposed on the decision
process varies considerably within the overall discipline of the
marketplace ("what the market will bear").

The discipline imposed by the PRA tends to limit the number

and/or size of surveys and to set certain thresholds on the

quality of survey design and implementation (depending on their

use). This discipline is enforced by agency and OMB review prior

to data collection, and by close scrutiny of many results after

the fact by the General Accounting Office, the Congress, and the

general public.

This discipline has produced some important advantages --

significantly higher response rates often are achieved in

government surveys -- but the oversight requirements have, on

occasion, tested the ability of the Federal government to make

surveys a responsive tool for public policy. Without some

attention to assuring an efficient review process, PRA review

could become a significant barrier to the rapid development of

customer surveys. This section spells out several proven methods

to make the process more efficient and describes a new option

developed to support surveys called for by E.O. 12862.

5.2 Delegation

Congress provided one mechanism in the PRA to streamline the

review process -- the assumption of more substantial review

responsibilities by individual agencies through the delegation

authorized in the Act. This mechanism has not been attractive to

most agencies for several reasons:

- The duplication and coordination objectives of the PRA are

difficult to achieve outside the centralized environment of

OMB.

- Agencies have found it difficult to justify the commitment of

scarce resources for an independent and sometimes highly

technical in-house review.

- Congress designed the PRA delegation process with several

difficult hurdles (specific determinations and a Notice and

Comment rulemaking procedure) to protect the public's

interests in minimizing burden.

5.3 Less Difficult Alternatives

In cooperation with Federal agencies, OMB has devised other

methods to streamline the review process. The most successful of

these devices have been "bundled," "contingency," and "generic"

clearances. Each of these approaches has been in use for more

than a decade, but they have been continuously refined under the

PRA.

- The bundled clearance involves combining many similar data

collections into a single clearance package. Such bundled

packages have been negotiated with agencies in cases where

combined review reduced demands on both OMB and agency

resources (where similar data collections present common

clearance issues).

- A contingency clearance is an approved plan for a data

collection that is justified by specific events -- the plan

is approved in advance of the events and can be implemented

immediately if those events take place, e.g., a survey to

track consequences of a strike. Advance review and approval

permits agencies to respond quickly to the need for data.

- Generic clearance also involves advance approval, but of a

well-defined class of low-burden data collections that are

not fully documented until they are actually used. A

generic clearance typically includes a set of agreements

negotiated between the sponsoring agency and OMB, covering

limitations on methods and usage, a burden cap, a periodic

reporting requirement to update the OMB Docket, and a

commitment by OMB to review any specific application

quickly.

Many excellent customer surveys have been reviewed and approved

through the existing clearance process (some examples are

mentioned in the NPR report). However, OMB expects a substantial

increase in the number of customer surveys sponsored by Federal

agencies within the next few years, as a result of Executive

Order 12862.

In order to make customer surveys more responsive tools for

agency management, OMB has proposed several types of generic

clearance to expedite the data collection tasks involved in a

program of customer surveys.

Each agency subject to the PRA has a senior agency official and

associated staff responsible for functions specified in the PRA,

including internal review of clearance requests and coordination

with OMB. These resources are available to assist you in

preparing proposals for generic clearance and any other aspects

of PRA review.

The remainder of this section describes generic clearance models

that may be useful for particular types of studies needed at

various stages of the customer satisfaction measurement process.

5.4 Generic Clearance for Qualitative Studies

This model has been used by the Census Bureau for questionnaire

development and testing, by the Bureau of Labor Statistics for

cognitive laboratory experiments, and by the Internal Revenue

Service for a program of focus groups. The terms of the

agreements worked out with these agencies have proven workable

for both the agencies and OMB. The terms include --

- A burden cap. The agency proposes a total annual burden that
  will be imposed by studies conducted under the generic
  clearance. Individual applications are tracked against the
  burden cap (a simple accounting sketch follows this list).

- Specified methods. The agency proposes the type(s) of data

collection(s) to be performed and the method(s) to be used,

with particular attention to those features and commitments

that assure consistency with the guidelines of the Paperwork

Rule (5 CFR 1320).

- A periodic reporting requirement. This allows tracking of

performance relative to the burden cap and updates the

public docket by demonstrating actual results achieved. The

frequency of such reports is negotiable.

- Quick-turnaround OMB review of specific applications. The

agency submits information on each specific application to

update the public docket prior to each actual data

  collection. OMB agrees to a quick-turnaround review of each
  submission (this varies from same day in the simplest cases to
  two weeks in more complex cases).
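
To illustrate the burden-cap accounting mentioned in the first
item above, here is a minimal sketch of charging specific
applications against an annual cap. The cap, application names,
and hour figures are all hypothetical.

    # A minimal accounting sketch: charging specific applications
    # against an annual burden cap. The cap and applications are
    # hypothetical.

    ANNUAL_BURDEN_CAP_HOURS = 5000

    applications = [
        ("focus groups, Form A", 300),       # (description, burden hours)
        ("pilot questionnaire test", 450),
        ("comment cards, region 3", 120),
    ]

    used = 0
    for description, hours in applications:
        used += hours
        assert used <= ANNUAL_BURDEN_CAP_HOURS, "burden cap exceeded"
        remaining = ANNUAL_BURDEN_CAP_HOURS - used
        print(f"{description}: {hours} hrs charged; {remaining} hrs remain")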

5.5 Generic Clearance for Quantitative Surveys

Quantitative surveys usually must meet more stringent standards

than qualitative studies and are more likely to be tailored to a

specific task. For these reasons, the models for generic

clearance are fewer and vary from agency to agency. Two examples

that have been operating for several years were developed with

the National Park Service and the Internal Revenue Service. Both

examples include a burden cap as described above, but they differ

in most other details.

The National Park Service model is built around a catalog of

tested questions covering a broad spectrum of issues involved in

operating a national park. The clearance also includes several

approved methods (sample designs) for administering the questions

to respondents. The components of this scheme were developed

with considerable effort and extensive consultation with OMB to

provide flexibility to the local managers of national parks.

Using this tool, managers can quickly assemble surveys in "kit"

form to address current problems and charge the reporting burden

against the burden cap of the generic clearance.

The IRS model supports the agency's program of customer

satisfaction measurement. It stipulates specific methods --
including professional design, adequate follow-up, and a
commitment to high response -- that assure high-quality statistics.

The model covers opinion questions only and includes steps to

ensure that response is perceived as purely voluntary. The other

features are identical to the qualitative clearance model

described above. (In fact, both qualitative and quantitative data

collections are managed in a single generic clearance.)

5.6 Simplified Generic Clearance for Voluntary Customer Surveys

In response to the recommendations of the National Performance

Review, OMB has developed a new simplified generic clearance

model specifically for voluntary customer surveys. This

simplified approach is possible because the conditions proposed

by NPR eliminate many of the issues that might otherwise require

a more extensive review.

This form of generic clearance is available only for strictly

voluntary collections of opinion information from clients who

have experience with the program that is the subject of each data

collection.

This option may not be used, for example --

- by regulatory agencies to survey regulated entities;

- in any situation where the respondent may perceive risks to

his interests, either through potential penalties or loss of

benefits;

- for collecting factual information (other than simple

identifying information, where needed); or

- for collecting data from the general public.

Surveys of former customers or discouraged customers may well be

useful for discovering sources of customer dissatisfaction, but

such surveys may also involve some difficult statistical issues

(e.g., the adequacy of coverage) that require more extensive

review; hence, they are not included here.

Agency proposals for this simplified generic clearance should

include a description of the kinds of customer surveys the

clearance will cover, as well as the agency program(s) they will

address. The clearance request should cite the authority of

Executive Order 12862, and request a three-year expiration date

(since these data collection programs are expected to support a

process of repeated measurement).

The request should propose a maximum number of burden hours (per

year) against which burden will be charged for each survey

actually used. It should also include an arrangement to submit a

brief summary of objectives, specific burden estimates, and all

final or near-final survey instruments (focus group scripts, test

questions, etc.) covered by the generic clearance for inclusion

in the OMB public docket prior to their use.

The proposal should specify an adequate internal review process

to ensure that individual applications are consistent with the

PRA, the Paperwork Rule, and the terms of the generic clearance.

This requires qualified reviewers who are independent of the

sponsoring programs. Review by a professional statistician may be

needed in some cases (e.g., if the generic clearance will include

quantitative surveys). This review must also assure that

material submitted for the public docket is accurate, timely, and

complete.

Finally, the agency should propose an appropriate progress
reporting schedule (e.g., at one-year intervals) for summarizing
actual burden, reporting results achieved, and addressing any
problems encountered.