Whatever the purposes of any public program, its logic and internal consistency can be exposed and analyzed using objective, rational criteria. The following is a set of criteria that could be applied systematically when designing or redesigning programs.[Endnote 1] Research and development in this field will certainly refine and augment the list.
1. Identify Who Benefits and How They Benefit
All program designs require clear, explicit identification of the primary beneficiaries and the specific benefits they can expect. Without this foundation, no rational program design methodology can be sustained. Programs designed to solve particular problems should be consistent with the missions of the agencies responsible. However, both the problems addressed and the missions pursued should always be framed in the context of who benefits.
Prioritize Beneficiaries. The public purposes and program goals must be defined in terms of benefits to individuals and/or groups. There are two levels of beneficiaries, primary and secondary, which must be reflected in a program's design.
Primary beneficiaries benefit directly from a program. To illustrate, the Department of Veterans Affairs serves veterans; the programs it administers clearly define categories of veterans who qualify as beneficiaries. Other programs' intended primary beneficiaries are not as clear. Consider government programs allowing removal of trees from federal lands. In this latter case the class of primary beneficiaries may be the loggers--who would otherwise be unemployed--the logging companies, the domestic and foreign timber consumers, or the recreational users who object to despoiling the environment. Secondary beneficiaries are those individuals, groups of individuals, or organizations benefiting indirectly from a program. For example, state governments are the secondary beneficiaries of federal grants for education.
Finally, stakeholders, such as taxpayers, constitute a class of interested parties who are not strictly beneficiaries of a program. Taxpayers deserve the most effective and efficient design possible to maximize the return on their investment of tax dollars.
Validate the Need. It is essential to verify that the need exists and will persist. The program designers must be able to document that the need is more than a short-term aberration. The determination of who benefits and how they benefit should be validated by the intended beneficiaries themselves, which is the essence of customer-driven government. Federal-level involvement must also be justified. The program must avoid addressing obsolete problems or needs that are more effectively solved through state or local initiatives, through the private sector, or by nonprofit organizations.
Determine Program Size and Scope. Program designers must also determine the size and scope of the population to be served so the course of action matches the problem. How many people are affected? What are the eligibility criteria? What are the characteristics of the affected populations (geography, socio-economic factors, etc.)? What level of diversity exists among the populations affected? What is the minimum program size necessary to achieve the results required?
If there is more than one problem to be addressed, the program designers must identify each need and prioritize solutions to avoid unbalanced or skewed designs. Social Security is an example of a program that has well-defined beneficiaries and missions. It is designed to provide income to individuals (and their eligible family members) who are retired or disabled. The Social Security Administration has long viewed its mission as the right check, to the right person, in the correct amount, on time. Clear eligibility criteria enable reliable estimates of client populations and size of benefits.
Inspire Public Confidence. Program design should give the program credibility with the public. There is a common perception that the political process does not always place program priorities on the appropriate beneficiaries. Without strong public confidence, programs have difficulty gaining approval and overcoming resistance during implementation. A federal program should appear to make sense, both in intent and in design, as a valid commitment of public resources.
2. Define and Evaluate Alternate Methods of Program Delivery
After clearly defining program goals and beneficiaries, the designers should consider any alternate forms of program delivery. Not all public needs must be satisfied by direct federal delivery of services. Alternatives to direct government service include regulatory requirements, government-sponsored enterprises, tax incentives, government guarantees, monitoring and enforcement, etc.
Different program delivery methods can often attain similar ends, although costs and schedules are not likely to be the same. Education is a prime example: it has been delivered in many forms, such as direct cash payments to veterans, categorical grants to states to fund educational initiatives at the local level, block grants, tax exemptions, and privatization. Two sources of potential alternatives are the list of 36 governmental mechanisms identified by Osborne and Gaebler in Reinventing Government and the program tools identified by Professor Lester Salamon of Johns Hopkins University.[Endnote 2] Salamon suggests that the major tools of public action should be studied in a well-structured framework:
Given the great proliferation of instruments that the public sector uses, and the likelihood that this proliferation will continue as a result of the privatization movement and the resource constraints under which government is operating, the development of a body of knowledge that can help guide the selection of tools and instruct program managers about the consequences of these choices seems increasingly worthwhile.[Endnote 3]
Review Against Assumptions. Alternatives must be reviewed explicitly against fundamental assumptions. What are the critical success factors identified by the beneficiaries? How has success been measured and how well does each program design address the needs identified? What is each design's ability to serve people, to adapt to emerging technology, and to use existing facilities? What are the possible unintended consequences or adverse effects of each design mechanism? How do they compare to the status quo? Examples of programs which merit review against their initial assumptions include job training, veterans health care, and sundry entitlement programs.
3. Examine Program Compatibility
It is rare that any new program is so independent and autonomous that it enters a domain totally unpopulated by preexisting efforts. Consequently, program design should be mindful of compatibility and complementarity with existing infrastructure or related programs.
Comparative Advantage. What does this program do that is unique? Would another program or program design perform these functions better? What would beneficiaries lose if this program did not exist?
Complementarity. Does the program complement other programs that serve the same beneficiaries? Does the program needlessly duplicate or overlap other programs at the federal, state, or local levels? Does the program foster cooperation or does it create barriers between beneficiaries and administrators? To illustrate, one-stop shopping, which provides multiple services conveniently at one place, contributes to integrated program delivery and is being considered by the Department of Labor for workforce development.[Endnote 4]
Harmony. Is the program consistent with the mission of the agency which would administer it? Does it create conflict in the agency? Is the program more compatible with another agency's mission? Are another agency's programs more interrelated and would that agency therefore be a better fit? How easy is it to add the program to an existing organizational structure? Will it replace one or more existing programs? Does it create or reduce redundancies?
Catalyst. Does the program foster competition or does it encourage monopolies? Does it serve as a catalyst? Does the program design result in the empowerment of people and communities so they have a vested interest in how the program is run? Is there positive leverage with other programs or destructive interference? Does the sum of all programs encourage the right behavior or do perverse incentives motivate dysfunctional behavior?
4. Assess Cost-Effectiveness and Efficiency
Program designers should apply cost-effectiveness and efficiency criteria to the program as a whole and to specific program elements (including organizational structures, program delivery, and administrative support functions) to estimate:
--cost (total, marginal, and/or administrative, as a measure of overhead) per unit of benefit (defined as program outputs or outcomes); and
--productivity (unit of benefit per full time employee or field office).
Program designers also should consider the program's social efficiency; i.e., how the program might be designed to achieve the greatest net benefit to society as a whole. In this context, benefits and costs are defined more inclusively than in analyzing cost-effectiveness, although there probably will be some overlap. To determine efficiency, analysts would employ the standard techniques of benefit-cost analysis to estimate the net social benefits of one or more program designs. Steps include:
--identifying major relevant impacts of the policy, including direct and indirect impacts on the program's direct beneficiaries, other affected groups, and society as a whole;
--categorizing costs and benefits for various affected groups; and
--quantifying dollar impacts.
In principle, the program design which is estimated to achieve the highest net benefit should be selected. However, optimizing distributional benefits (to either primary or secondary beneficiaries) typically conflicts with optimizing aggregate social welfare. Sometimes this may be appropriate (e.g., targeted programs). At other times, some might criticize the selection of sub-optimally efficient programs as pork-barrel spending. Making explicit the trade-offs inherent in each design can help inform the debate; a simple numerical sketch of these calculations follows.
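To make these calculations concrete, the following is a minimal illustrative sketch, written in Python with entirely hypothetical figures, of how a designer might compare alternative program designs on cost per unit of benefit, administrative overhead, productivity per full-time employee, and estimated net social benefit. The design names, dollar amounts, and benefit counts are invented for illustration and do not describe any actual program.

    # Illustrative comparison of hypothetical program designs.
    # All figures are invented for the sake of the example.
    designs = [
        # name, total cost ($), administrative cost ($), units of benefit,
        # full-time employees, estimated social benefit ($)
        ("Design A (direct delivery)", 10_000_000, 1_500_000, 40_000, 120, 14_000_000),
        ("Design B (grants to states)",  9_000_000, 1_000_000, 35_000,  60, 12_500_000),
        ("Design C (tax incentive)",     8_000_000,   500_000, 30_000,  25, 11_000_000),
    ]

    for name, total, admin, units, ftes, social_benefit in designs:
        cost_per_unit = total / units                  # total cost per unit of benefit
        admin_share = admin / total                    # administrative overhead share
        productivity = units / ftes                    # units of benefit per full-time employee
        net_social_benefit = social_benefit - total    # standard benefit-cost net figure
        print(f"{name}: ${cost_per_unit:,.0f}/unit, "
              f"{admin_share:.0%} overhead, "
              f"{productivity:,.0f} units/FTE, "
              f"net social benefit ${net_social_benefit:,.0f}")

    # In principle, the design with the highest net social benefit would be
    # preferred, but distributional goals may justify a different choice.

The same comparison could, of course, be carried out with a spreadsheet; the point is simply that each design's trade-offs become explicit and comparable.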
5. Evaluate Consistency with Accepted Management Principles
While the field of management is not a precise science, it does represent an accumulation of knowledge useful in illuminating the strengths and weaknesses of public programs. Using generally accepted management principles to evaluate various program designs may help identify both inherent deficiencies and alternative remedies. Where violations of conventional wisdom are appropriate, the justifications should be clear and convincing.
Among the numerous management principles applicable to program design to be given particular emphasis are those consistent with the core values of reinventing government, namely:
--serving customers,
--empowering employees,
--empowering communities to solve their own problems, and
--fostering excellence.
For example, program designers should consider the following reinvention principles that improve the efficiency and effectiveness of their programs:
--create a clear sense of mission,
--steer more, row less,
--delegate authority and responsibility,
--replace regulations with incentives,
--develop budgets based on outcomes,
--inject competition where appropriate,
--search for market, not administrative, solutions, and
--measure success by customer satisfaction.
6. Ensure Financial Feasibility
Given the increasingly strong drive to reduce the federal budget deficit, and competing claims on limited funding, affordability is an inevitable criterion for evaluating new and existing public programs. Although not all programs have direct budgetary impact, many do. Typically, cost calculations and estimates can be difficult to derive. Accounting methods interact with funding streams and auditing requirements in ways that are not always sensible or consistent.
Nevertheless, the program cost must be known if the government is to avoid incurring unlimited liabilities. A program's budgetary costs can be determined in a variety of ways, each of different interest to different constituencies. Among the alternate cost elements that may be relevant (a simple illustrative tally appears after the list) are:
--total life cycle costs: the full costs of creating, operating, maintaining, and terminating the program from inception to conclusion (where end points are reasonably defined);
--annual budget requirements: the amount that must be appropriated annually to sustain/maintain the program throughout its lifetime;
--risk of cost overruns: the probability that actual program costs will exceed approved budget levels, including the expected cost of unbudgeted contingent liabilities;
--revenue generation: the expected income return to the Treasury from the program's operation; and
--debt management: the ratio of the funds collected or recaptured to the amount owed the U.S. Treasury.
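As a purely illustrative aid, the short sketch below (again in Python, with invented numbers) tallies several of these cost elements for a hypothetical program: total life cycle cost, average annual budget requirement, the expected cost of overruns, and the debt management ratio. None of the figures refer to an actual program or account.

    # Hypothetical cost elements for a single program; all values are invented.
    creation_cost      = 50_000_000    # one-time start-up cost ($)
    annual_operating   = 20_000_000    # yearly operations and maintenance ($)
    termination_cost   =  5_000_000    # wind-down cost at conclusion ($)
    program_years      = 10

    overrun_probability = 0.30          # chance actual costs exceed approved budget
    overrun_size        = 15_000_000    # size of overrun if it occurs ($)

    amount_owed_treasury = 40_000_000   # outstanding obligation to the Treasury ($)
    funds_recaptured     = 28_000_000   # collections or recaptures to date ($)

    life_cycle_cost = creation_cost + annual_operating * program_years + termination_cost
    annual_requirement = life_cycle_cost / program_years
    expected_overrun_cost = overrun_probability * overrun_size
    debt_management_ratio = funds_recaptured / amount_owed_treasury

    print(f"Total life cycle cost:      ${life_cycle_cost:,.0f}")
    print(f"Average annual requirement: ${annual_requirement:,.0f}")
    print(f"Expected cost of overruns:  ${expected_overrun_cost:,.0f}")
    print(f"Debt management ratio:      {debt_management_ratio:.2f}")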
7. Determine Feasibility of Implementation
Even the best designed program will not be effective in its execution if fatal flaws of implementation are not considered during its design. Unfortunately, those persons most familiar with implementation do not usually play key roles in design, depriving policy officials of the opportunity to obtain field validation and assessment before final decisions on design. Without effective implementation, program proponents will not see their goals successfully accomplished. Given the number and types of federal programs, it is unrealistic to formulate universal principles of implementation that apply exactly to all programs. Yet it may be instructive to identify a few common pitfalls to consider in the designs of many types of public service programs.
Foremost, implementation should be embedded in the program design.[Endnote 5] Therefore, the more that programs explicitly consider anticipated difficulties and conditions of implementation, the fewer the surprises and the more likely program success. Although detailed operational plans are not usually developed during the program design phase, at least some consideration in broad outline would be prudent management and politics. However, implementation must not compromise or obscure the program's basic purpose and integrity. If political realities so dilute the program's integrity that the original intent is no longer served, then managers should abort the design rather than succumb to the temptation of just getting anything approved.
Second, it is imperative to identify clearly who is ultimately responsible for the program, both to anticipate their capacity to administer it and to ensure accountability. We cannot know the barriers and impediments to success without knowing clearly who (which organization(s)) is charged with carrying out the program. An objective assessment of the capabilities of the agencies through which the program resources and responsibilities will flow could pinpoint potential roadblocks, sources of conflict, and choke points that might hamper implementation. And, as trite as it sounds, those responsible for implementing a program must know what they are supposed to do. Policies, if not clear, accurate, and consistent, may cause misunderstanding and confusion, and confer discretion that may be roundly abused. For federal programs to be implemented more effectively, we must ensure the following:
(a) those bearing the major responsibility for implementation are aware of what is actually happening in another unit of government, a field agency, or the private sector;
(b) those responsible for implementation earnestly strive to achieve the stated program goals and mission;
(c) policies and actions are decisively made and clearly transmitted; and
(d) decisions and policies encourage creativity, flexibility, and adaptability.
Third, an accurate description of the organizational network directly affected by the program would identify the critical relationships and points of coordination required or implied by program operations. This might include a flow chart of transactions depicting key relationships. Omissions and redundancies are more easily and economically remedied in the planning stages than later. Implementing policies and programs requires a coordination of efforts between various groups and organizations, particularly regarding complex policies or decisions.
Fourth, the timing of critical events is absolutely essential. Although many alternate sequences of activities are possible, key deadlines may not be discretionary (e.g., budget approvals, legislative requirements, awarding of contracts, etc.) and certain schedules offer distinct advantages over others. More programs suffer from overly ambitious timetables than from unduly conservative ones; effective implementation requires sufficient time.
Fifth, in designing programs, attention must be given to resources. However, due to various political compromises and conflicts, sufficient resources often are not available to implement a program properly. Program designers may not be aware of inadequate resources or may choose to ignore the problem. There are several important components of resources critical to implementation, including:
(a) adequate staff with the appropriate knowledge (programmatic and managerial);
(b) sufficient information on how a program or decision is to be implemented and the support/approval of other necessary agencies and participants;
(c) necessary authority to permit the programs or decisions to be implemented; and
(d) facilities and equipment necessary to implement the program or decision.
Sixth, prudent experimentation before committing the government to a major public investment could help avoid expensive misadventures and illuminate the pitfalls in uncharted domains. While pilot demonstrations are not appropriate for all programs, they are highly desirable under a wide range of conditions such as:
(a) high risk programs--where the consequences of failure are catastrophic or life threatening;
(b) where the economic investment is substantial enough to justify small experiments;
(c) where there is significant complexity--either organizational or programmatic (e.g., health care, homelessness, economic renewal);
(d) where there is great uncertainty surrounding the proper solutions;
(e) where there is intense public controversy or low political consensus; and
(f) when there is no immediate urgency that precludes more deliberate exploration of alternate approaches.
8. Provide for Program Flexibility
Perhaps the only universal element in all federal programs is the certainty that even the most carefully formulated plans and policies will not be realized exactly as designed. Given the inordinately long gestation periods for many public programs, coupled with lengthy implementation periods, it may take as long as five years before a program is mature enough to sustain a valid evaluation.
In the meantime, the conditions and constraints under which a program operates may have changed significantly from when it was first conceptualized. Therefore, the design of programs should permit enough management flexibility to allow agencies to adapt to external changes, unforeseen circumstances, variations in resource levels, and schedule changes. In addition, program designs should free implementing organizations to tailor programs to local circumstances. Where initial rigidity in structure is necessary, waiver authority is essential to accommodate exceptional cases and unanticipated conditions.[Endnote 6]
While no one can anticipate every possible contingency, the most likely changes (e.g., budget erosion, schedule slippage, personnel shifts) should not have catastrophic impacts on well-designed programs. Programs dependent on critical, highly sophisticated, specialized, or scarce resources (e.g., key individuals, unproven technology, rare expertise, unique facilities) are fragile because they are vulnerable to uncertainties in the availability, quality, and quantity of these essential resources. For example, attempts to modernize automated information systems often under- or overestimate the availability of state-of-the-art technology, resulting in either investments in obsolete hardware and software or costly delays waiting for technology to catch up.
Two especially critical flexibilities involve schedule and budget. With the enormous vagaries of when and what will survive the approval processes in both the executive and legislative branches, and the uncertainties of program execution, good program designs must withstand significant variations in time (i.e., date of initiation, rates of progress, completion milestones, etc.) and budget availability (both total amount and annual appropriations). Programs should be designed to absorb reasonable schedule changes with minimal impact. Similarly, robust programs should yield acceptable returns on investment over a wide range of program size (i.e., they should be fairly elastic over different scale levels). Programs designed as all-or-nothing propositions, as opposed to those that can be bought by the yard, are especially brittle in the face of inevitable uncertainties in the appropriations and authorizations processes.
9. Institute Performance Measurement and Program Evaluation
Program designs should incorporate feedback mechanisms enabling managers and policymakers to assess how well a program works in terms of both implementation and achievement of program goals. Of course, program designers first must sufficiently identify the program's goals and specific performance objectives to measure outcomes.
Take, for example, the Customs Service's trade law violation detection program. In fiscal 1991, U.S. Customs did not detect about 84 percent of the estimated trade law violations in imported cargo.[Endnote 7] Does a detection rate of 16 percent indicate success? A strong design should require benchmarks for success/failure and delineate when a program should be redesigned, concluded because successful, or terminated for failure.
Designers, in consultation with potential users of the performance data, should monitor a range of key performance indicators.[Endnote 8] A wide range of types of data could be collected and reported, including a program's:
--inputs (e.g., funding and staff, beneficiary characteristics),
--workload,
--outputs (services/final products),
--outcomes of products or services (e.g., for a job training program, the number of job placements that result in a year's employment),
--efficiency and/or productivity,
--beneficiary satisfaction,
--employee satisfaction, and
--service quality and timeliness.
A performance measurement system also requires identified data sources and reliable collection and reporting methods. The value of the information provided should be carefully weighed against the costs of data collection, including the burden imposed on reporting entities (such as state and local governments). Where practical, performance reports should be designed to meet the diverse information needs (in terms of both quantity and particular indicators presented) of the various users of performance data (e.g., beneficiaries, line staff, program managers, senior agency officials, OMB, etc.). Nonetheless, compromises among users may be unavoidable.
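The minimal sketch below (Python, with hypothetical counts loosely patterned on the Customs example above) illustrates how a few of these indicators might be recorded for a reporting period and compared against a benchmark set at design time. The benchmark and all underlying counts are assumptions for illustration only.

    # Hypothetical performance indicators for a single reporting period.
    indicators = {
        "inputs_funding":        120_000_000,  # appropriated funds ($)
        "inputs_staff":          950,          # full-time employees
        "workload_cases":        500_000,      # estimated violations in imported cargo
        "outputs_detections":    80_000,       # violations actually detected
        "beneficiary_satisfaction": 0.72,      # survey score (0-1)
    }

    detection_rate = indicators["outputs_detections"] / indicators["workload_cases"]
    benchmark = 0.50   # hypothetical target detection rate set at design time

    print(f"Detection rate: {detection_rate:.0%} (benchmark {benchmark:.0%})")
    if detection_rate < benchmark:
        print("Performance below benchmark -- candidate for redesign.")
    else:
        print("Performance meets benchmark.")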
The new Government Performance and Results Act of 1993 will institutionalize performance measurement throughout the federal government. Key elements of this legislation include strategic plans, performance plans, performance reports, managerial flexibility waivers, and performance budgeting. This Act requires a variety of pilot projects which could provide fruitful opportunities for the application of the several program design criteria suggested here.
10. Build in Cessation Provisions
There is a common perception that some government programs and organizations outlive their usefulness yet continue to exist without rational purpose or tangible benefit. Government has often been criticized as being totally incapable of abandoning programs, despite success, failure, or lack of evidence of either. Many government programs do not explicitly provide for their conclusion or termination for any cause whatsoever. A program design which does not address the ultimate fate of the program is at best incomplete, at worst self-perpetuating. Although some programs have no expected expiration (e.g., Social Security), others do possess an ultimate end point (e.g., inoculation programs should cease when the disease is essentially eliminated, as in the case of smallpox). Determination should be made at the outset as to what marks the success or failure of a program. When these benchmarks are established, it will be easier to decide when to redesign a program or simply stop it.
The Helium Fund Program was created in 1925 to ensure helium supplies for blimps. It has accumulated a 176-year supply of helium along with a $1.4 billion debt. All original objectives (except paying for itself) have been met, and private suppliers could meet all federal needs at lower cost.[Endnote 9] The obvious question is: who are the primary beneficiaries of this program now that blimps are no longer critical to national security or commerce? Moreover, the helium program lacks any cessation provisions, an all too common failing in program design.
Sunset laws have a checkered history.[Endnote 10] Although they enjoyed a surge of popularity at the state level in the 1970s and early 1980s, sunset laws have produced mixed results, and many states have repealed ineffective sunset legislation. In January 1993, a sunset bill (S. 186) was reintroduced in the 103d Congress which would require formal reauthorization of federal programs every ten years. Given the controversy surrounding the applicability of sunset requirements to federal agencies, the bill's future is uncertain. Perhaps more promising avenues exist in strengthening reauthorization requirements by incorporating rigorous performance measurements and enforcing appropriate discipline in both the executive and legislative branches of government.
ENDNOTES
1. Five of these criteria are consistent with evaluation criteria proposed by Senator Reid in his bill, S. 186, Spending and Control Programs Evaluation Act of 1993.
2. Osborne, David, and Ted Gaebler, Reinventing Government (Reading, MA: Addison-Wesley Publishing Company, Inc., 1992).
3. Salamon, Lester M., Beyond Privatization: The Tools of Government Action (Washington, D.C.: Urban Institute Press, 1989).
4. See NPR Accompanying Reports, Department of Labor and Department of Health and Human Services (Washington, D.C.: U.S. Government Printing Office, September 1993).
5. "The great problem, as we understand it, is to make the difficulties of implementation a part of the initial formulation of policy. Implementation must not be conceived as a process that takes place after, and independent of, the design of policy. Means and ends can be brought into somewhat closer correspondence only by making each partially dependent on the other.'' Pressman and Wildavsky, Implementation (Berkeley, CA: University of California Press, 1973), p. 143.
6. See NPR Accompanying Reports, Executive Office of the President, Streamlining Management Controls, and Strengthening the Partnership in Intergovernmental Service Delivery.
7. U.S. General Accounting Office, High Risk Series: Managing the Customs Service, HR-93-14 (Washington, D.C.: U.S. General Accounting Office, December 1992).
8. See Osborne and Gaebler, pp. 349-359, which describe several approaches to performance measurement in the public sector. A variety of others are referenced on p. 392.
9. "Helium Agency--Deterrent to Blimp Wars: Congress Must Decide Whether 1920s Program is Worth Saving,'' San Francisco Chronicle (May 21, 1993), p. A5; and "Forget Oil: U.S. Rules Helium Market,'' Washington Post (June 17, 1993), p. A17. See also NPR Accompanying Report, Department of Interior (recommendation DOI13).
10. Kearney, Richard C., "Sunset: A Survey and Analysis of the State Experience," Public Administration Review (January/February 1990), pp. 49-57; and Nice, David C., "Sunset Laws and Legislative Vetoes in the States," State Government, vol. 58, no. 1 (Spring 1985), pp. 27-32.