
Rating the Net

19 Hastings Comm/Ent L.J. 453 (1997)

Jonathan Weinberg[*]

Internet filtering software is hot. Plaintiffs in ACLU v. Reno[1] relied heavily on the existence and capabilities of filtering software (also known as blocking software) in arguing that the Communications Decency Act was unconstitutional.[2] President Clinton has pledged to "vigorously support" the development and widespread availability of filtering software.[3] Free speech activists see that software as providing the answer to the dilemma of indecency regulation, making it possible "to reconcile free expression of ideas and appropriate protection for kids."[4] Indeed, some of the strongest supporters of blocking software are First Amendment activists who sharply oppose direct government censorship of the Net.[5]

 Internet filtering software, further, is here. As of this writing, the Platform for Internet Content Selection (PICS) working group has developed a common language for Internet rating systems, making it much easier to create and market such ratings.[6] There are two heavily-promoted ratings systems (SafeSurf and RSACi) allowing content providers to rate their own World Wide Web sites in a sophisticated manner. Microsoft's World Wide Web browser now incorporates a feature called Content Advisor that will block Web sites in accordance with the rules of any PICS-compliant ratings system, including SafeSurf and RSACi.[7] Stand-alone blocking software -- marketed under such trademarks as SurfWatch, Cyber Patrol, CYBERSitter, KinderGuard, Net Nanny and Parental Guidance -- is gaining increasing sophistication and popularity.

 It is easy to understand the acclaim for filtering software. That software can do an impressive job at blocking access, from a given computer, to sexually explicit material that a parent does not wish his or her child to see. The PICS standard for describing ratings systems is an important technical achievement, allowing the development and easy use of a variety of sophisticated ratings schemes.

 In the midst of the general enthusiasm, though, it is worth trying to locate the technology's limitations and drawbacks. Blocking software is a huge step forward in solving the dilemma of sexually explicit speech on the Net, but it does come at a cost. People whose image of the Net is mediated through blocking software will miss out on worthwhile speech -- through deliberate exclusion, through inaccuracies in labeling inherent to the filtering process, and through the restriction of unrated sites.

 In Part I of this Article, I will offer some general background on blocking software. In Part II, I will consider the extent to which inaccuracy is inevitable in rating the Net. It is easy to find anecdotes about sites inappropriately blocked by filtering software, and complaints that ratings systems are insufficiently fine-tuned to label particular sites accurately. Do these bad results reflect a limitation inherent in the nature of ratings systems? In Part III, I will consider the treatment of unrated sites. Relatively few Internet content sources today carry ratings. What portion of the Net universe can we expect to carry ratings once the technology is mature? To the extent that ratings systems' reach is less than complete, what implications does that have for Net speech as a whole? In Part IV, I will examine the extent to which adults' access to the Net is likely, in the near future, to be filtered through blocking software.
 
 

I. BACKGROUND


 

I assume in this Article that the reader is familiar with the Internet; for those who are not, the court's findings of fact in ACLU v. Reno provide an excellent guide.[8] Internet rating services respond to parents' (and governments') concerns about children's accessing sexually explicit material, and other adult content, available on the Net. They have focused greatest attention on children's access to the World Wide Web (WWW or Web). The Web consists of a vast collection of documents, containing text, pictures, sound and/or video, each residing on some computer linked to the Internet. Any Web document may contain links to other Web documents or other Internet resources, so that a user with a Web browser can jump from one document to another with a single mouse click. It is easy for users without sophisticated equipment or expensive Internet connections to create Web pages that are then accessible to any other user with access to the Web. Rating services have also paid special attention to Usenet newsgroups. These allow any user to post text or pictures (or sound, or video) to one of more than 15,000 different open fora, each devoted to a different topic. Worldwide, about 200,000 computer networks participate in the Usenet news system. A small number of Usenet newsgroups are devoted to sexually explicit material.

 It is fairly easy for software to screen access to Usenet news. Since each newsgroup has a name describing its particular topic (such as rec.music.folk, soc.culture.peru or alt.tv.x-files), software writers can do a reasonably effective job of blocking access to sexually explicit material simply by blocking access to those newsgroups (such as alt.sex.stories) whose names indicate that they include sexually explicit material.
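
 To make that screening concrete, the following is a minimal sketch, in Python, of how name-based newsgroup blocking might be implemented; the list of blocked prefixes and the sample group names are illustrative assumptions, not any vendor's actual block list.

    # Illustrative sketch of name-based newsgroup screening (hypothetical prefix list).
    BLOCKED_PREFIXES = ("alt.sex", "alt.binaries.pictures.erotica")

    def is_blocked(newsgroup: str) -> bool:
        # A group is blocked if its name begins with any blocked prefix.
        return newsgroup.startswith(BLOCKED_PREFIXES)

    for group in ("rec.music.folk", "soc.culture.peru", "alt.sex.stories"):
        print(group, "->", "blocked" if is_blocked(group) else "allowed")

 Such screening is only as good as the descriptiveness of the newsgroup names themselves; a group whose name does not advertise its content will pass through unblocked.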

 Blocking access to sexually explicit material on the World Wide Web is much more difficult. There are millions of individual pages on the Web, and the number is increasing every day. An astonishingly small fraction of those pages contain sexually explicit material.[9] Every Web page (indeed, every document accessible over the Internet) has a unique address, or "URL,"[10] and the URLs of some Web pages do contain clues as to their subject matter. Since nothing in the structure or syntax of the Web requires Web pages to include labels advertising their content, though, reliably identifying pages with sexually explicit material is not an easy task.

 First-generation blocking software compiled lists of off-limits Web pages through two methods. First, the rating services hired raters to work through individual Web pages by hand, following links to sexually explicit sites, and compiling lists of URLs that were to be deemed off-limits to children. Second, they used string-recognition software to automatically proscribe any Web page that contained a forbidden word (such as "sex" or "xxx") in its URL. The software was not appreciably configurable by home users; once a parent installed the software on his home computer, the question of which sites would be blocked was answered entirely by the ratings service.

 The PICS specifications contemplate that a ratings system can be more sophisticated. A ratings service may rate a document along multiple dimensions: that is, instead of merely rating the document as "adult" or "child-safe," it can give it separate ratings for (say) violence, sex, nudity, and adult language. Further, along any given dimension, the ratings service may choose from any number of values. Instead of simply rating a site "block" or "no-block" for violence, a ratings service can assign it (say) a rating of 1 through 10 for increasing amounts of violent content. These features are important because they make possible the creation of filtering software that is customizable by parents. A parent subscribing to such a ratings service, for example, might seek to block only sites rated over 3 for violence and 8 for sex. Finally, the PICS documents note that ratings need not be assigned by the authors of filtering software. They can be assigned by the content creators themselves, or by third parties. One of the consequences of the PICS specifications is that varying groups -- the Christian Coalition, say, or the Boy Scouts -- can seek to establish rating services reflecting their own values, and these ratings can be implemented by off-the-shelf blocking software.[11]
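
 As a concrete illustration of how such customization might operate, here is a minimal sketch (in Python, with invented category names and numeric values; it does not reproduce the actual PICS label syntax) of a browser comparing a document's ratings against parent-chosen ceilings, along the lines of the violence-over-3, sex-over-8 example above.

    # Hypothetical parent-chosen ceilings: block anything rated above these values.
    parent_limits = {"violence": 3, "sex": 8, "nudity": 2, "language": 5}

    def allow(document_ratings):
        # Permit the document only if every rated category is at or below its ceiling.
        # A category absent from the label is treated as 0 here -- an assumption, not PICS policy.
        return all(document_ratings.get(category, 0) <= limit
                   for category, limit in parent_limits.items())

    print(allow({"violence": 2, "sex": 1}))   # True: within every ceiling
    print(allow({"violence": 6, "sex": 1}))   # False: violence exceeds the ceiling of 3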

 Most rating services today follow the PICS specifications. Their particular approaches, however, differ. The Recreational Software Advisory Council (RSAC) has developed an Internet rating system called RSACi.[12] Participating content providers rate their own sites along a scale of 0 through 4 on four dimensions: violence, nudity, sex and language. RSAC does not itself market blocking software; instead, it licenses its service to software developers. Another system in which content providers rate their own speech is called SafeSurf;[13] in that system, content providers choose from nine values in each of nine categories, from "profanity" through "gambling."[14]

 Rating services associated with individual manufacturers of blocking software include Cyber Patrol, Specs for Kids, and CYBERSitter. Cyber Patrol rates sites along fifteen dimensions, from "violence/profanity" to "alcohol & tobacco," but assigns only two values within each of those categories: CyberNOT and CyberYES.[15] Specs for Kids rates documents along eleven dimensions, including "advertising," "alternative lifestyles," "politics" and "religion," and assigns up to five values (including "no rating") in each of those categories.[16] CYBERSitter, by contrast, maintains a single list of objectionable sites; it affords users no opportunity to block only portions of the list.[17]
 
 

II. ACCURACY

 Since blocking software first came on the market, individual content providers have complained about the ratings given their sites. Not all of those complaints relate to problems inherent to filtering software. For example, some programs tend to block entire directories of Web pages simply because they contain a single "adult" file. That means that large numbers of innocuous Web pages are blocked merely because they are located near some other page with adult content.[18] Indeed, it appears that some programs block entire domains, including all of the sites hosted by particular Internet service providers.[19] This is highly annoying to affected content providers. It may be a temporary glitch, though; over time, it's plausible that the most successful rating services will -- properly -- label each document separately.[20]

 Other problems arise from the wacky antics of string-recognition software.[21] America Online's software, for example, ever alert for four-letter words embedded in text, refused to let users register from the British town of "Scunthorpe." (The on-line service solved the problem, to its own satisfaction, by advising its customers from that city to pretend they were from "Sconthorpe" instead.)[22]

 Controversies over sites actually rated by humans are less amenable to technological solution. One dispute arose when Cyber Patrol blocked animal-rights web pages because of images of animal abuse, including syphilis-infected monkeys; Cyber Patrol classed those as "gross depiction" CyberNOTs. (The problem was worse because Cyber Patrol, following the entire-directory approach described above, blocked all of the hundred or so animal welfare, animal rights and vegetarian pages hosted at the Animal Rights Resource Site.) An officer of Envirolink, which had provided the web space, responded: "Animal rights is usually the first step that children take in being involved in the environment. Ignoring companies like Mary Kay that do these things to animals and allowing them to promote themselves like good corporate citizens is a 'gross depiction.'"[23]

 Sites discussing gay issues are commonly blocked, even if they contain no references to sex. Surfwatch, in its initial distribution, blocked a variety of gay sites including the Queer Resources Directory, an invaluable archive of material on homosexuality in America,[24] and the International Association of Gay Square Dance Clubs. It responded to protests by unblocking most of the contested sites.[25] Other blocking programs, on the other hand, still exclude them: Cyber Patrol blocks a mirror of the Queer Resources Directory, along with Usenet newsgroups including clari.news.gays (which carries AP and Reuters dispatches) and alt.journalism.gay-press.[26] CYBERSitter is perhaps the most likely to block any reference to sexual orientation, forbidding such newsgroups as alt.politics.homosexual. In the words of a CYBERSitter representative: "I wouldn't even care to debate the issues if gay and lesbian issues are suitable for teenagers. . . . We filter anything that has to do with sex. Sexual orientation [is about sex] by virtue of the fact that it has sex in the name."[27]

 The list of blocked sites is sometimes both surprising and alarming. Cyber Patrol blocks Usenet newsgroups including alt.feminism, soc.feminism, clari.news.women, soc.support.pregnancy.loss, and alt.support.fat-acceptance.[28] It blocks gun and Second Amendment WWW pages (including one belonging to the NRA Members' Council of Silicon Valley). It blocks the Web site of the League for Programming Freedom (a group opposing software patents). It blocks the Electronic Frontier Foundation's censorship archive.[29] CYBERSitter blocks the National Organization for Women web site.[30] It blocks the Web site of Peacefire, a teen-run cyber-rights group, although the site contains no questionable material other than criticism of CYBERSitter.[31]

 Authors' complaints about ratings are magnified, and made particularly bitter, by the fact that the ratings of third-party labelling services such as Cyber Patrol and CYBERSitter are trade secrets, and not disclosed.[32] So far as I am aware, no service takes steps to inform content providers of the ratings it assigns to their pages. Content providers can discover their ratings only by buying the various blocking programs and periodically searching for their own sites.[33]

 You might think that a better answer lies in rating systems, such as RSACi and SafeSurf, in which content providers evaluate their own sites. Surely, you might figure, an author could hardly disagree with a rating he chose himself. The matter, though, is not so clear. When an author evaluates his site in order to gain a rating from any PICS-compliant rating service, he must follow the algorithms and rules of that service. Jonathan Wallace, thus, in an article called "Why I Will Not Rate My Site,"[34] asks how he is to rate "An Auschwitz Alphabet,"[35] his powerful and deeply chilling work of reportage on the Holocaust. The work contains descriptions of violence done to camp inmates' sexual organs. A self-rating system, Wallace fears, would likely force him to choose between the unsatisfactory alternatives of labeling the work as suitable for all ages, on the one hand, or "lump[ing it] together with the Hot Nude Women page" on the other.[36]

 It seems to me that at least some of the rating services' problems in assigning ratings to individual documents are inherent. It is the nature of the process that no ratings system can classify documents in a perfectly satisfactory manner, and this theoretical inadequacy has important real-world consequences.

 Consider first how a ratings system designer might construct a ratings algorithm. She might provide an algorithm made up entirely of simple, focused questions, in which each question has a relatively easily ascertainable "yes" or "no" answer. (Example: "Does the file contain a photographic image depicting exposed male or female genitalia?") Alternatively, she might seek to afford evaluators more freedom to apply broad, informal, situationally sensitive guidelines so as to capture the overall feel of each site. (Example: "Is the site suitable for a child below the age of 13?")[37]

 In jurisprudential terms, the first approach relies on "rules"; the second, on "standards."[38] The RSACi system attempts to be rule-based. In coding its violence levels, for example, to include "harmless conflict; some damage to objects"; "creatures injured or killed; damage to objects, fighting"; "humans injured or killed with small amount of blood"; "humans injured or killed; blood and gore"; and "wanton and gratuitous violence; torture; rape," its designers have striven to devise simple, hard-edged rules, with results turning mechanically on a limited number of facts.[39]

 Some other rating systems rely much more heavily on standards. The SafeSurf questionnaire, for example, requires the self-rater to determine whether nudity is "artistic" (levels 4 through 6), "erotic" (level 7), "pornographic" (level 8), or "explicit and crude" pornographic (level 9).[40] The Voluntary Content Rating self-rating system promoted by CYBERSitter is almost the model of a standard; it offers as its only guidance the instructions that self-raters should determine whether their sites are "not suitable for children under the age of 13," and whether they include material "intended for an audience 18 years of age or older."[41] Specs for Kids has its raters distinguish between sites that refer to homosexuality [1] "[i]mpartial[ly]," or [2] discuss it with "acceptance or approval," or [3] "[a]ctive[ly] promot[e]" it or "attempt[] to recruit the viewer."[42] Each of these classifications requires more judgment on the part of the evaluator, and is not so hard-edged as the RSACi categories. Folks in this country with different outlooks and values may disagree as to where the lines fall.[43] With respect to the Specs treatment of references to homosexuality, folks in this country disagree as to whether the categories are even coherent.[44] These categories work only within a community of shared values, so that evaluators can draw on the same norms and assumptions in applying the value judgments embedded in the standards.

 This distinction follows the more general rules-standards dichotomy in law, which focuses on the instructions that lawmakers give law-appliers in a variety of contexts.[45] Legal thought teaches that rules and standards each have disadvantages. A problem with standards is that they are less constraining; relatively speaking, a standards-based system will lack consistency and predictability.[46] Rules become increasingly necessary as the universe of law-appliers becomes larger, less able to rely on shared culture and values as a guide to applying standards in a relatively consistent and coherent way.[47] One can see a parallel in recent reporting on problems the Yahoo! indexing service faces in seeking to classify ever more WWW sites. Yahoo!'s taxonomy embodies editorial judgments; the job is not amenable to resolution simply through rules. Consistent application of the taxonomy "comes from having the same 20 people classify every site, and by having those people crammed together in the same building where they are constantly engaged in a discussion of what belongs where."[48] As a result,
 

Yahoo! is faced with an unforgiving trade-off between the size and the quality of its directory. If Yahoo! hires another 50 or 60 classifiers to examine every last site on the Web, the catalog will become less consistent . . . . On the other hand, if Yahoo! stays with a small number of classifiers, the percentage of sites Yahoo! knows about will continue to shrink.[49]
 
 

It is for this reason that the designers of RSACi attempted to be rule-like. They contemplate that the universe of ratings evaluators will include every content provider on the Web; that group can claim no shared values and culture. To accommodate that heterogeneous group, RSAC offers a rules-based questionnaire that (it hopes) all can understand in a similar manner. This, RSAC explains, will "provide[] fair and consistent ratings by eliminating most of the subjectivity inherent in alternative rating systems."[50] It seems plain that with a relatively large universe of evaluators -- and it is hard to see how one could seek to map the entire Net without one -- a ratings system relying too heavily on standards just won't work. The dangers of arbitrariness and inconsistency will be too great.[51]

 Rules, though, have their own problems. They direct law-appliers to treat complex and multifaceted reality according to an oversimplified schematic.[52] The point of rules, after all, is that by simplifying an otherwise complex inquiry, they "screen[] off from a decisionmaker factors that a sensitive decisionmaker would otherwise take into account."[53] They may thus generate results ill-serving the policies behind the rules.[54] Consider the task of deciding which citizens are mature enough to vote. A rule -- say, that any person can vote if he or she has reached the age of 18 -- has the advantage of administrability, and avoids biased enforcement. Few of us would welcome a system in which a government bureaucrat examined each of us individually to determine whether we were mature enough to vote. Because the rule is much simpler than the reality it seeks to govern, though, it is both over- and under-inclusive; it bars from the franchise some people under 18 who are in fact mature, and grants the franchise to some people over 18 who are not. Rules thus give rise to their own sort of arbitrariness.[55] At best, a rule-based filtering system will miss nuances; at worst, it will generate absurd results (as when America Online, enforcing a rule forbidding certain words in personal member profiles, barred subscribers from identifying themselves as "breast" cancer survivors).[56]

 Given this theoretical critique, one might think that the challenge facing ratings system designers is to devise really good rules-based systems, ones that track reality as well as possible, minimizing the difficulties I've noted. That is what RSAC claims to have done in RSACi.[57] I think the product of any such effort, though, necessarily will be flawed. Over the next few pages, I'll try to explain why.

 Let's return to the choices facing a ratings system designer as she constructs blocking software. So far in this Article, I've glossed over the most basic question confronting her: what sort of material should trigger ratings consequences? Should children have access to material about weapons making?[58] hate speech?[59] artistic depictions of nudity?[60] Again, she can take two different approaches. First, she can decide all such questions herself, so that the home user need only turn the system on and all choices as to what is blocked are already made for him. CYBERSitter adopts this approach.[61] This has the benefit of simplicity, but seems appropriate only if members of the target audience are in basic agreement with the rating service (and each other) respecting what sort of speech should and should not be blocked.[62]

 Alternatively, she can leave those questions for the user to answer. The ratings system designer need not decide whether to block Web sites featuring bomb-making recipes, or hate speech. She can instead design the system so that the user has the power to block those sites if he chooses. Microsoft's implementation of the RSACi labels, thus, allows parents to select the levels of adult language, nudity, sex and violence that the browser will let through.[63] Cyber Patrol allows parents to select which of the twelve CyberNOT categories to block.

 Either approach, though, imposes restrictions on the categories chosen by the ratings system designer. If the system designer wishes to leave substantive choices to parents, she must create categories that correspond to the different sides of the relevant substantive questions. That is, if the designer wishes to leave users the choice whether to block sites featuring hate speech, she must break out sites featuring hate speech into a separate category or categories. If she wishes to leave the user the choice whether to block sites that depict explicit sexual behavior but nonetheless have artistic value, she must categorize those sites differently from those that do not have artistic value.[64] On the other hand, if the system designer makes those substantive decisions herself, making her own value choices as to what material should and should not be blocked, she must (of course) create categories that correspond to those value choices.

 The problem is that many of these questions cleave on lines defined by standards. Many users, for example, might like to block "pornography," but to allow other, more worthy, speech, even if that speech is sexually explicit. SafeSurf responds to that desire when it requires self-raters to determine whether nudity is "artistic," "erotic," "pornographic," or "explicit and crude" pornographic. It gets high marks for attempting to conform its system to user intuitions, but its lack of rulishness means problems in application.[65] Similarly, Specs' distinction between "impartial reference," "acceptance or approval," and "active promotion" of homosexuality may well correspond to the intuitions of much of its target audience, but will hardly be straightforward in actual application. The problem increases with the heterogeneity of the service's audience: the more heterogeneous the audience, the more categories a rating system must include, to accommodate different user preferences.

 With this perspective, one can better appreciate the limitations of RSAC's attempt to be rule-bound. For one thing, RSACi ignores much content that some other ratings systems class as potentially unsuitable, including speech relating to drug use, alcohol, tobacco, gambling, scatology, computer hacking and software piracy, devil worship, religious cults, militant or extremist groups, weapon making, and tattooing and body piercing, and speech "grossly deficient in civility or behavior."[66] For many observers (myself included), RSACi's limited scope is good news. Software that blocks access to controversial political speech is not a good thing, even if parents install it voluntarily. My point, though, is that RSACi had to confine its reach if it was to maintain its rule-bounded nature.

 The problem appears as well in connection with the categories RSACi does address. Consider RSACi's treatment of sex. It divides up sexual depictions into "passionate kissing," "clothed sexual touching," "non-explicit sexual activity," and "explicit sexual activity; sex crimes."[67] But note what RSACi, in contrast to some other ratings systems, does not attempt. It does not seek to distinguish "educational" materials from other depictions, so that users can allow the former but not the latter. It does not seek to distinguish "artistic" depictions from others, so that users can allow the former but not the latter. It does not seek to distinguish "crude" depictions from others, so that users can allow the latter but not the former. There is no way, consistent with rulishness, that it can seek to distinguish the serious or artistic from the titillating. It achieves rule-boundedness, and ease of administration, at the expense of nuance; it achieves consistent labelling, but in categories that do not correspond to the ones many people want.

 In sum, rating system designers face a dilemma. If a ratings service seeks to map the Net in a relatively comprehensive manner, it must rely on a relatively large group of evaluators. Such a group of evaluators can achieve fairness and consistency only if the ratings system uses simple, hard-edged categories relying on a few, easily ascertainable characteristics of each site. Such categories, though, will not categorize the Net along the lines that home users will find most useful, and will not empower those users to heed their own values in deciding what speech should and should not be blocked. To the extent that ratings system designers seek to allow evaluators to consider more factors, in a more situationally specific manner, to capture the essence of each site, they will ensure inconsistency and hidden value choices as the system is applied.
 

III. UNRATED SITES

 Blocking software can work perfectly only if all sites are rated.[68] Otherwise, the software must either exclude all unrated sites, barring innocuous speech, or allow unrated sites, letting in speech that the user would prefer to exclude. What are the prospects that a rating service will be able to label even a large percentage of the millions of pages on the World Wide Web? What are the consequences if it cannot?
 

Consider first rating services associated with individual manufacturers of blocking software, such as CYBERSitter and Cyber Patrol. These services hire raters to label the entire Web, site by site. The limits on their ability to do so are obvious. For one thing, as the services get bigger, and hire more and more employees to rate sites, their consistency will degrade; that was one of the lessons of Part II of this Article. For another, no service could be big enough to rate the entire Web. Too many new pages come on-line every day. The content associated with any given page is constantly changing.[69] Further, some of the sites most likely to be ephemeral are also among the most likely to carry sexually explicit material. A ratings service simply can't keep tabs on every college freshman who gets to school and puts up a Web page, notwithstanding that college freshmen are of an age to be more interested in dirty pictures than most. A ratings service certainly can't keep tabs on every Web page put up by a college freshman in Osaka, say, or in Amsterdam. So any such rating service must take for granted that there will be a huge number of unrated sites.

 As a practical matter, simply enabling access to all unrated sites is not an option for these rating services; it would let through too much skin for them to be able to market themselves as reliable screeners. Instead, they must offer users other options, dealing with unrated sites in one of two ways. First, they can seek to catch questionable content through string-recognition software. CYBERSitter, for example, offers this option. The problem with this approach, though, is that at least under current technology, string-recognition software simply doesn't work very well. I've already mentioned America Online's travails with the town of Scunthorpe and the word "breast";[70] other examples are easy to find. Surfwatch, for example, blocked a page on the White House web site whose URL -- http://www.whitehouse.gov/WH/kids/html/couples.html -- contains the forbidden word "couples."[71]
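
 The failure mode is easy to reproduce. The following sketch, in Python, shows how a naive substring test of the sort described above flags innocuous URLs and text; the list of forbidden strings is an invented example, not any product's actual list.

    # Sketch of naive substring filtering; the forbidden strings are invented examples.
    FORBIDDEN = ("sex", "xxx", "couples")

    def flagged(text: str):
        # Return every forbidden string found anywhere in the text, case-insensitively.
        lowered = text.lower()
        return [word for word in FORBIDDEN if word in lowered]

    print(flagged("http://www.whitehouse.gov/WH/kids/html/couples.html"))  # ['couples']
    print(flagged("Visit scenic Middlesex County"))                        # ['sex'] -- a false positive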

The second option is for the rating services simply to block all unrated sites. Industry members seem to contemplate this as the necessary solution. Microsoft, for example, cautions Internet content providers that "[f]or a rating system to be useful, the browser application must deny access to sites that are unrated."[72] Other observers reach the same result.[73]

 What about self-rating approaches, like those of SafeSurf and RSACi? These services have the potential for near-universal reach, since they can draw on the services of an effectively unlimited number of evaluators. While the evaluators will be a diverse group (to say the least), rating service designers can try to cope with that diversity by constructing rule-bound questionnaires. While some evaluators may misrepresent their sites, rating services can try to devise enforcement mechanisms to cope with that as well. On the other hand, self-rating services will not achieve their potential unless content providers have a sufficient incentive to participate in the ratings process in the first place. That incentive is highly uneven.[74]

 Mass-market commercial providers seeking to maximize their audience reach will participate in any significant self-rating system, so as not to be shut out of homes in which parents have configured their browsers to reject all unrated sites.[75] Many noncommercial site owners, though, may not participate. They may be indifferent to their under-18 visitors, and may not wish to incur the costs of self-rating. It is still early to predict what those costs may be. For the owner of large archives containing many documents, supplying a rating for each page may be a time-consuming pain in the neck.[76] If self-rating services choose to charge content providers a fee to participate, that will provide another disincentive. RSAC's business plan for RSACi contemplates that it will charge a fee to Internet content providers (although not before January 1997), just as it charges a fee to video-game manufacturers who participate in its video-game self-rating system.[77] RSAC, though, has not yet announced any details of its charging plans; a ratings service with hopes of global reach might allow noncommercial sites to rate themselves gratis.

 There may be other disincentives as well. Some content providers may not self-rate because they are philosophically opposed to the censorship the rating system enables,[78] or dissatisfied with the choices the ratings system provides.[79] More generally, why should a college or graduate student with a Web page (say) bother to self-rate? He's not necessarily writing for the benefit of some old fogy with kids and blocking software, and if the fogy excludes him, the author may not much care.

 It may be that the only way to ensure participation in a self-rating system even in a single country (let alone internationally) would be for government to compel content providers to self-rate (or to compel Internet access providers to require their customers to do so). It's not obvious, though, how such a requirement would work. The drafters of such a law would face the choice of forcing content providers to score their sites with reference to a particular rating system specified in the law, or allowing them leeway to choose one of a variety of PICS-compliant ratings systems. Neither approach seems satisfactory. The first, mandating use of a particular rating system, would freeze technological development by eliminating competitive pressures leading to the introduction and improvement of new searching, filtering and organizing techniques. It would leave consumers unable to choose the rating system that best served their needs. The second would be little better. Some government organ would have to assume the task of certifying particular self-rating systems as adequately singling out material unsuitable for children. It is hard to imagine how that agency could ensure that every approved system yielded ratings that were in fact useful to most parents, while nonetheless maintaining a healthy market and allowing innovation.

 In any event, a mandatory self-rating requirement would likely be held unconstitutional. In Riley v. National Federation of the Blind,[80] the Court considered a requirement that professional fundraisers disclose to potential donors the percentage of charitable contributions collected over the previous twelve months that were actually turned over to charity. The Court explained that "mandating speech that a speaker would not otherwise make" necessarily alters the content of the speech, and thus amounts to content-based regulation.[81] Even when a compelled statement is purely factual, the compulsion burdens protected speech and is subject to "exacting" scrutiny, subject to the rule that government cannot "dictate the content of speech absent compelling necessity, and then, only by means precisely tailored."[82]

 The Court repeated that analysis in McIntyre v. Ohio Elections Commission,[83] striking down a requirement that persons distributing materials relating to elections state their names and addresses in those materials. The Court explained that the requirement was a "direct regulation of the content of speech," subject to "exacting" scrutiny.[84] Notwithstanding that the compelled disclosure was useful to voters and uncontroversially factual, the state was requiring "that a writer make statements or disclosures she would otherwise omit"; the restriction, accordingly, could not stand unless narrowly tailored to serve an overriding state interest.[85]

 A requirement that Internet content providers provide ratings of their speech falls straightforwardly under the rule of those cases.[86] Even if the characterization of speech according to the taxonomy of a particular rating system were deemed factual and value-neutral, requiring a speaker to characterize her speech in that manner would require her to incorporate into her speech a "statement[] . . . she would otherwise omit." Such a requirement must surmount exacting scrutiny.[87]

 In fact, mandatory self-rating is even more problematic. The Court has repeatedly recognized the impermissibility of requiring a speaker to associate herself with particular ideas she disagrees with.[88] Requiring self-rating does that, because rating is not factual and value-neutral. Mandatory self-rating compels the speaker to associate herself with the values and worldview embodied in the rating taxonomy. The drafters of RSACi, or SafeSurf, may view -- and hence compartmentalize -- the universe of speech in a way I reject. RSACi, for example, classifies sexually explicit speech without regard to its educational value or its crass commercialism; that choice is inconsistent with the values of many. Some taxonomies make the conflict more obvious than others; it would surely offend my values to be required to characterize my speech using Specs for Kids criteria, under which a message that expresses "acceptance" of homosexuality is by definition not "impartial."[89] But any taxonomy incorporates editorial and value judgments.

 Moreover, a self-rating requirement may otherwise chill protected speech. To the extent that rating criteria are less than wholly rule-like, their vagueness will lead Internet content providers to self-censor. Content providers will "steer far wider of the unlawful zone"[90] in order to avoid sanctions for misrating. "Vagueness and the attendant evils . . . are not rendered less objectionable because the regulation of expression is one of classification rather than direct suppression."[91]

 I am doubtful that a self-rating requirement could survive exacting scrutiny. Even without a self-rating requirement, parents can restrict their children's access to sexually explicit sites by using blocking programs, and instructing the software to block all unrated sites. Indeed, the wave of the future may well be Web browser add-ons, marketed by entities such as Disney, that collect a few tens of thousands of Web sites specifically chosen to be kid-friendly and block access to all others. A self-rating requirement would be helpful to parents only in that it would enable them to limit their children's access in such a way that the kids could also view an uncertain number of additional sites, not containing sexually explicit material, whose providers would not otherwise choose to self-rate. In light of the first amendment damage done by a compelled self-rating requirement,[92] accomplishing that goal does not seem to be a compelling or overriding state interest.[93]

 The result, though, is that child-configured lenses will show only a limited, flattened view of the Net. If many Internet content providers decline to self-rate, the only "safe" response may be to configure blocking software to exclude unrated sites.[94] The plausible result? A typical home user, running Microsoft Internet Explorer set to filter using RSACi tags (say), would have a browser configured to accept duly rated mass-market speech from large entertainment corporations, but to block out a substantial amount of quirky, vibrant individual speech from unrated (but child-suitable) sites. This prospect is disturbing.

 The Internet is justly celebrated as "the most participatory form of mass speech yet developed."[95] A person or organization with an Internet hookup can easily disseminate speech across the entire medium at low cost; the resulting "worldwide conversation" features an immense number of speakers and "astonishingly diverse content."[96] As Judge Dalzell noted in his ACLU v. Reno opinion, the Internet vindicates the First Amendment's protection of "the 'individual dignity and choice' that arises from 'putting the decision as to what views shall be voiced largely into the hands of each of us,'" because "every minute[, Internet communication] allows individual citizens actually to make those decisions."[97] But this prospect is threatened if widespread adoption of blocking software ends up removing much of the speech of ordinary citizens, leaving the viewer little to surf but mass-market commercial programming. One hardly needs the Internet for that; we get it already from the conventional media.

 In sum, blocking software could end up blocking access to a significant amount of the individual, idiosyncratic speech that makes the Internet a unique medium of mass communication. Filtering software, touted as a speech-protective technology, may contribute to the flattening of speech on the Net.
 
 

IV. CHILDREN, ADULTS AND BLOCKING SOFTWARE


 

You may protest that I am making much of little here. After all, blocking software is intended to restrict children's access to questionable sites. It won't affect what adults can see on the Net -- or will it? It seems to me that, in important respects, it will. The desire to restrict children's access has spurred the recent development of filtering technology. I'm doubtful, though, that widespread adoption of the software will leave adults unaffected.

 In a variety of contexts, we can expect to see adults reaching the Net through approaches monitored by blocking software. In the home, parents may set up filters at levels appropriate for their children, and not disable them for their own use.[98] Indeed, they may subscribe to an Internet access provider that filters out material at the server level (so that nobody in the household can see "objectionable" sites except by establishing an Internet access account with a new provider).[99] If, as seems likely, future versions of the PICS specifications support the transmission of filtering criteria to search engines, then users running Internet searches will not even know which sites otherwise meeting their criteria were censored by the blocking software.[100]

 Other people get their Internet connections through libraries; indeed, some policymakers tout libraries and other community institutions as the most promising vehicle for ensuring universal access to the Internet.[101] The American Library Association takes the position that libraries should provide unrestricted access to information resources; it characterizes the use of blocking programs as censorship.[102] This policy, however, is not binding on member libraries. It is likely that a significant number of public libraries will install blocking software on their public-access terminals, including terminals intended for use by adults; indeed, some have already done so.[103] As one software vendor warns:

Unlimited Web Access is a Political Nightmare.

Your library may spen[d] tens of thousands of dollars on Internet hardware / training and then be closed down by an angry parent willing to go to the press and the town council because their child saw pornographic materials in the library.[104]
 
 

Still other people get Internet access through their employers. Corporations too, though, wary of risk and wasted work time, may put stringent filters in place. Some large companies worry about the possibility of being cited for sexual harassment by virtue of material that came into the office via the Net.[105] Even more are concerned about sports and leisure information that they feel may detract from business productivity. One consultant sums up the corporate mood: "My kids went out on the Web to a museum and saw great artwork, but I don't want my employees hanging out at the Louvre all day on my nickel."[106]

 In sum, we may see home computers blocked for reasons of convenience, library computers blocked for reasons of politics, and workplace computers blocked for reasons of profit. (Even one university temporarily installed blocking software in its computer labs, in aid of a policy "prohibit[ing] the display in public labs of pornographic material unrelated to educational programs."[107]) The result may be that large amounts of content may end up off-limits to a substantial fraction of the adult population.[108]

 There are limits to this -- sex sells. Many home Internet users will be loath to cut themselves off from the full range of available speech. Most on-line services and Internet access providers, while attempting to make parents feel secure about their children's exposure to sexually explicit material on the Net, will still host such material for adults who wish to view it.[109] It seems safe to conclude, though, that blocking software will have the practical effect of restricting the access of a substantial number of adults.

 This should affect the way we think about filtering software. Any filtering system necessarily incorporates value judgments about the speech being blocked. These value judgments are not so controversial if we think of the typical user of blocking software as a parent restricting his children's access. It is part of a parent's job, after all, to make value judgments regarding his own child's upbringing. The value judgments are much more controversial, though, if we think of the typical "blockee" as an adult using a library computer, or using a corporate computer after hours. If we are concerned about these users' access to speech, then we need to think hard about the way blocking software works, the extent to which it can be accurate, and the extent to which it is likely to exclude the sort of speech that makes the Internet worthwhile.
 
 

CONCLUSION


 

Across the world, governments and industry are turning to filtering software as the answer to the problem of sexually explicit material on the Internet. In the United Kingdom, service providers and police have endorsed a proposal recommending that Internet service providers require users to rate their own Web pages, and that the providers remove Web pages that their creators have "persistently and deliberately misrated."[110] The European Commission has urged the adoption of similar codes of conduct to ensure "systematic self-rating of content" by all European content providers.[111] Some U.S. companies have been leaning the same way: Compuserve has decided to "encourage" its users and other content providers to self-rate using RSACi.[112]

 Ratings, though, come at a cost. It seems likely that a substantial number of adults, in the near future, will view the Net through filters administered by blocking software. Intermediaries -- employers, libraries and others -- will gain greater control over the things these adults read and see. Sites may be stripped out of the filtered universe because of deliberate political choices on the part of ratings service administrators, and because of inaccuracies inherent in the ratings process. If a ratings service is to categorize a large number of sites, it cannot simultaneously achieve consistency and nuance; the techniques it must rely on to achieve consistency make it more difficult to capture nuance, and make it less likely that users will find the ratings useful. The necessity of excluding unrated sites may flatten speech on the Net, disproportionately excluding speech that was not created by commercial providers for a mass audience.

 This is not to say that ratings are bad. The cost they impose, in return for the comforting feeling that we can avert a threat to our children, is surely much less than that imposed (say) by the Communications Decency Act. Ratings provide an impressive second-best solution. We should not fool ourselves, though, into thinking that they impose no cost at all.


[*] Associate Professor, Wayne State University Law School. I am grateful to Jessica Litman, whose comments greatly improved this Article. An earlier version of this Article was presented at the 1996 Telecommunications Policy Research Conference; I am indebted to the participants in that conference, in particular Paul Resnick and Rivkah Sass, for their perspectives.

[1] 929 F.Supp. 824 (E.D. Pa. 1996) (striking down the Communications Decency Act).

[2] See ALA Plaintiffs' Post-Hearing Brief, American Civil Liberties Union v. Reno, 929 F.Supp. 824 (E.D. Pa. 1996) (No. 96-963), at Parts I.C.1 & I.C.2 (available at <http://www.eff.org/pub/Legal/Cases/EFF_ACLU_v_DoJ/960429_ala_post-hearing.brief>).

[3] Statement by the President (June 12, 1996), <http://www.eff.org/pub/Legal/Cases/EFF_ACLU_v_DoJ/960612_clinton_cda_decision.statement>.

[4] Daniel Weitzner, Deputy Director, Center for Democracy and Technology, quoted in Peter H. Lewis, Microsoft Backs Ratings System for Internet, NEW YORK TIMES, Mar. 1, 1996 at D5; see Jerry Berman & Daniel J. Weitzner, Abundance and User Control: Renewing the Democratic Heart of the First Amendment in the Age of Interactive Media, 104 YALE L.J. 1619, 1634-35 (1995).

[5] Anti-pornography activists, by contrast, have been decidedly less enthusiastic. They express concern that such software leaves "the parent responsible to go out and buy the software, become educated about how to apply it, how to install it, how to use it, and how then to monitor it to make sure your child or his friends have not gotten around it." Pornography on the Internet: Straight Talk from the Family Research Council (radio broadcast transcript available at <http://www.townhall.com/townhall/FRC/net/st96g1.html>) (statement of Colby May, Senior Counsel, American Center for Law and Justice). To say that blocking software obviates the need for government speech restrictions, they continue, is "saying . . . that we are free to pollute our cultural environment, and parents have to buy the gas masks." Id. (statement of Kristi Hamrick, moderator).

[6] See Paul Resnick & James Miller, "PICS: Internet Access Controls Without Censorship" (1996), available at <http://www.w3.org/pub/WWW/PICS/iacwcv2.htm>. PICS was developed by the World Wide Web Consortium, the body responsible for developing common protocols and reference codes for the evolution of the Web, with the participation of industry members and onlookers including Apple, America Online, AT&T, the Center for Democracy and Technology, Compuserve, DEC, IBM, MCI, the MIT Laboratory for Computer Science, Microsoft, Netscape, Prodigy, the Recreational Software Advisory Council, SafeSurf, SurfWatch, and Time Warner Pathfinder.

[7] Microsoft calls its World Wide Web browser Internet Explorer; Content Advisor first appeared in Internet Explorer's version 3.0.

 Content Advisor makes it easiest to use RSACi ratings. RSACi is an Internet ratings system established by the Recreational Software Advisory Council (RSAC), which was created by the Software Publishers Association in 1994 to create a rating system for computer games. RSAC formed a working party in late 1995, including representatives from Time Warner Pathfinder, AT&T, PICS and Microsoft, to develop RSACi. See infra note 12 and accompanying text.

 For Content Advisor to use ratings from a PICS-compliant ratings service other than RSACi, the user must copy that service's .RAT file into his Windows System folder, and then click on "Add A New Ratings Service" in the IE Options menu. See Safesurf March News, <http://www.safesurf.com/nletter/summ96.htm>.

[8] American Civil Liberties Union v. Reno, 929 F.Supp. 824, 830-49 (E.D. Pa. 1996).

[9] While it is difficult to ascertain with any certainty how many sexually explicit sites are accessible through the Internet, the president of a manufacturer of software designed to block access to sites containing sexually explicit material testified in the Philadelphia litigation that there are approximately 5,000 to 8,000 such sites, with the higher estimate reflecting the inclusion of multiple pages (each with a unique URL) attached to a single site. The record also suggests that there are at least thirty-seven million unique URLs. Accordingly, even if there were twice as many unique pages on the Internet containing sexually explicit materials as this undisputed testimony suggests, the percentage of Internet addresses providing sexually explicit content would be well less than one tenth of one percent of such addresses.
 
 

Shea v. Reno, 930 F. Supp. 916, 931 (S.D.N.Y. 1996) (citations omitted).

[10] The term is an acronym for "Uniform Resource Locator." See Internet Engineering Task Force, Uniform Resource Locators [RFC-1738] (1994), available at <ftp://ds.internic.net/rfc/rfc1738.txt>.

[11] See Resnick and Miller, supra note 6.

[12] The heavy hitters behind RSACi have been RSAC, established in 1994 by the Software Publishers Association, and such industry players as Microsoft, AT&T and Time Warner Pathfinder. See supra note 7. As of April 15, 1996, content providers have been able to rate their own sites using the RSACi system by completing a questionnaire at the RSAC web site, <http://www.rsac.org>.

[13] SafeSurf describes itself as an "international no-fee parents' organization formed to protect children on the Internet and the rights of parents through technology and education." SafeSurf press release (Apr. 18, 1996), <http://www.safesurf.com/press/press12.htm>. It uses the CyberAngels, an offshoot of Curtis Sliwa's Guardian Angels, to patrol sites whose owners have rated them as suitable for "All Ages," verifying that the sites do not contain adult content. E-mail from Wendy Simpson, President, SafeSurf to Declan McCullagh, Apr. 28, 1996 (on file with author).

[14] The categories are: profanity, heterosexual themes, homosexual themes, nudity, violence, intolerance, glorifying drug use, other adult themes, and gambling. See SafeSurf Rating System, <http://www.safesurf.com/classify/index.html>.

[15] See CyberNOT List Criteria, <http://www.microsys.com/cyber/cp_list.htm>.

[16] See Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>.

[17] See CYBERSitter Product Information, <http://solidoak.com/cysitter.htm>. CYBERSitter characterizes its customers as "strong family-oriented people with traditional family values." The product is sold by Focus on the Family, a conservative group. See Brock N. Meeks & Declan B. McCullagh, "Jacking In from the 'Keys to the Kingdom' Port," CYBERWIRE DISPATCH, July 3, 1996, available at <http://www.eff.org/pub/Publications/Declan_McCullagh/cwd.keys.to.the.kingdom.0796.article>.

[18] For the most part, Cyber Patrol drops all but the first three characters of the filename in the URL, thus blocking some innocuous pages. See Meeks & McCullagh, supra note 17. In some instances, Cyber Patrol blocks at a higher level, so that (for example) it excluded all of Jewish.com because personal ads were not stored in a separate subdirectory. Eric Berlin & Andrew Kantor, Who Will Watch the Watchmen?, available at <http://www.iw.com/current/feature3.html>. In at least one case, Cyber Patrol has blocked entire Internet service providers. See infra note 19.

 Surfwatch has suggested to content providers that they accommodate its decision to block on the directory level by segregating their adult material in separate directories. See Censorship sucks, <http://cocacola.whoi.edu/~scott/surf.html>; e-mail from Chris Kryzan for wide distribution, c. June 15, 1995 (on file with author); see also surfwatching, <http://www.links.net/dox/surfwatch.html>.

[19] According to an article in the electronic version of Internet World, CYBERSitter blocks all sites hosted by cris.com. A CYBERSitter representative suggests that Internet service providers "are responsible for their content," and that the owners of cris.com are to blame because they "will not monitor their [customers'] sites." According to another recent report, CYBERSitter now blocks all sites at the WELL, the pioneering California electronic community and Internet access provider. E-mail from Bennett Hazelton to the fight-censorship mailing list, Oct. 27, 1996 (on file with author). It has threatened to block at least one other large service provider. See infra note 31 and accompanying text.

 Cyber Patrol blocks all pages hosted by crl.com (including a real estate agency's Web pages). A Cyber Patrol representative says that the company is reviewing its policy of blocking entire domains. Berlin & Kantor, supra note 18.

[20] The RSACi and Safesurf systems allow content providers to label each page individually, and that should also be true for future self-rating systems. At a recent PICS developers' workshop, the prevailing view was that filtering software should expect to find PICS-compliant labels only in the individual documents, not in the directories or elsewhere in the site. See PICS Developers' Workshop Summary, <http://www.w3.org/pub/WWW/PICS/picsdev-wkshp1.html>. This makes sense only in the context of software that makes the blocking decision for each page individually.

 The more difficult question is whether the problem will persist in connection with third-party rating services.

At least one third-party rating service (Specs for Kids) does rate on the document level, and such a service should be more attractive to many users; the question is whether the increased sales will outweigh the added expense of that granularity. The recent CYBERSitter/Peacefire controversy, see infra note 31 and accompanying text, suggests that my optimism may be unfounded.

[21] For sheer wackiness, though, nothing can match a CYBERSitter feature that causes Web browsers to white out selected words but display the rest of the page (so that the sentence "President Clinton opposes homosexual marriage" would be rendered "President Clinton opposes marriage" instead). See e-mail from Solid Oak Software Technical Support to Bob Stock, Oct. 24, 1996 (on file with author).
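
 A minimal sketch, in Python, of the sort of word white-out described in this note. The word list and function are invented for illustration; the point is only that deleting a word while displaying the rest of the sentence can silently change its meaning.

# Hypothetical sketch of word "white-out": the filter deletes listed words
# but displays the rest of the text, so the reader never knows anything was
# removed. The word list below is illustrative, not CYBERSitter's actual list.
import re
WHITED_OUT = {"homosexual"}
def white_out(text):
    kept = [w for w in text.split() if re.sub(r"\W", "", w).lower() not in WHITED_OUT]
    return " ".join(kept)
print(white_out("President Clinton opposes homosexual marriage"))
# -> "President Clinton opposes marriage", a sentence with a different meaning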

[22] See Risks Digest Vol. 18: Issue 7 (Apr. 25, 1996), <http://catless.ncl.ac.uk/Risks/18.07.html#subj3.1>. On string-recognition software, see infra notes 56 & 70-71 and accompanying text.

[23] See Meeks & McCullagh, supra note 17; <http://www.mit.edu/activities/safe/labeling/cp-bans-animal-rights>.

[24] <http://www.qrd.org>. The Health/AIDS directory at QRD, for example, contains information from the Centers for Disease Control and Prevention, the AIDS Book Review Journal, and AIDS Treatment News. See Meeks & McCullagh, supra note 17.

[25] See Surfwatch Censorship Against Lesbigay WWW Pages, <http://www.utopia.com/mailings/censorship/Surfwatch.Censorship.Against.Lesbigay.WWW.Pages.html>; e-mail from Chris Kryzan, supra note 18.

[26] See Meeks & McCullagh, supra note 17.

[27] Id.

[28] Id. Indeed, Cyber Patrol apparently blocks all of the alt.support groups (including, for example, alt.support.shyness and alt.support.depression), along with such groups as alt.war.vietnam and alt.fan.frank-zappa. E-mail from Declan McCullagh to the fight-censorship mailing list, Oct. 4, 1996 (on file with author); Cyber Patrol: The Truth, <http://www.canucksoup.net/CYBERWHY.HTM>.

[29] Meeks & McCullagh, supra note 17. Among the other WWW sites blocked by Cyber Patrol is that of the Campaign for Real Ale, a British consumer group dedicated to preserving and promoting traditional pubs and independent breweries. Cyber Patrol lists it as an alcohol & tobacco CyberNOT. Charles Arthur, Real Ale is Too Strong for the American Moralists, THE INDEPENDENT (LONDON), July 22, 1996, at 11.

[30] Meeks & McCullagh, supra note 17. It also blocks any site whose URL contains "sinnfein" or "facism." Berlin & Kantor, supra note 18.

[31] See Wired News, Dec. 10, 1996, <http://www.wired.com/news/story/901.html>. CYBERSitter has threatened to block all 2500 domain names hosted by Peacefire's Internet service provider if the provider refuses to delete Peacefire's site. Id.

[32] See Meeks & McCullagh, supra note 17.

[33] One exception: Specs for Kids makes it possible, without buying a copy of the program, to discover the ratings it gives particular sites. See <http://www.newview.com/cust/ss_chsp_lvl2.html>.

[34] Jonathan Wallace, "Why I Will Not Rate My Site" (1996), available at <http://www.spectacle.org/cda/rate.html#report>.

[35] <http://www.spectacle.org/695/ausch.html>.

[36] Wallace, supra note 34.

[37] Or she could take an approach falling somewhere in between. The polar models I describe in text, though, provide a useful way of looking at the problem.

[38] See Duncan Kennedy, Form and Substance in Private Law Adjudication, 89 HARV. L. REV. 1685, 1685, 1688-89 (1976); Pierre Schlag, Rules and Standards, 33 UCLA L. REV. 379, 379-80, 383-98 (1985); Kathleen Sullivan, The Supreme Court, 1991 Term--Foreword: The Justices of Rules and Standards, 106 HARV. L. REV. 22, 58-59 (1992); Jonathan Weinberg, Broadcasting and Speech, 81 CALIF. L. REV. 1101, 1167-69 (1993); see also Cass R. Sunstein, Problems with Rules, 83 CALIF. L. REV. 953, 956 (1995) (adopting somewhat different terminology, but contrasting "two stylized conceptions of legal judgment": "clear, abstract rules laid down in advance of actual applications" and "law-making at the point of application through case-by-case decisions, narrowly tailored to the particulars of individual circumstances").

[39] See Jim Miller et al., "Rating Services and Rating Systems (and their Machine Readable Descriptions)" (revision 5, last modified May 5, 1996), <http://www.w3.org/pub/WWW/PICS/services.html>, at Appendix B. There is a slightly different description of the RSACi categories at Rating the Web, <http://www.rsac.org/why.html>.

[40] See SafeSurf Rating System, <http://www.safesurf.com/classify/index.html>.

[41] See Solid Oak Software, Inc. VCR Rating System, <http://www.solidoak.com/vcr/htm>.

[42] Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>.

[43] It is inevitable that folks will disagree as to the consequences of an item of speech being pigeonholed in a particular category -- whether, for example, youngsters should be exposed to scenes of murder and mayhem. My point about the examples in text is that different folks will come to different conclusions regarding which pigeonhole a given item of speech should be deemed to occupy in the first place.

[44] Elsewhere in the Specs rating system, one can find definitions that seem standard-like because it's doubtful that they mean what they say. For example, the Specs default settings allow nobody under 18 to view Web sites that "[a]ttempt[] to persuade the viewer to join a specific political group." Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>. If this is not to exclude the Democratic and Republican parties (say), the test must embody some unarticulated assumptions about which political appeals are the pernicious ones -- rendering the de facto category standard-like and unconstrained.

[45] See sources cited supra note 38.

[46] See Kennedy, supra note 38, at 1688-89; Sullivan, supra note 38, at 62-63; see also Frederick Schauer, Formalism, 97 YALE L.J. 509, 539-40 (1988); Sunstein, supra note 38, at 972-77.

[47] See Schauer, supra note 46, at 512 n. 8:

 [Consider] the transformation of the 'honor codes' at various venerable universities. These codes were phrased in quite general terms at their inception in the 18th and 19th centuries because these schools contained homogenous student bodies who shared a common conception of the type of conduct definitionally incorporated within the word "honor." If a person thought that purchasing a term paper from a professional term paper service was consistent with being honorable, then that person simply did not know what "honor" meant. As values have changed and as student bodies have become less homogenous, however, shared definitions of terms such as "honor" have broken down. Some people now do think that buying a term paper can be honorable, and this breakdown in shared meaning has caused general references to "honor" to be displaced in such codes by more detailed rules. There may now be little shared agreement about what the precept "be honorable" means, but there is considerable agreement about what the rule "do not purchase a term paper" requires.

[48] Steve G. Steinberg, Seek and Ye Shall Find (Maybe), WIRED, May 1996, at 108, 112-13.

[49] Id. at 113; see also Leslie Walker, On the Web, A Catalogue of Complexity, WASH. POST, Nov. 20, 1996, at F17; The total librarian, THE ECONOMIST, available at <http://www.economist.com/review/rev9/rv12/review.html>.

[50] Before You Begin, <http://www.rsac.org/images/spmenu.map?300,0>.

[51] Indeed, for just that reason, First Amendment philosophy largely proscribes standards on the level of operative First Amendment doctrine. Speech-regulatory law, the Supreme Court has explained, must be expressed in hard-edged, nondiscretionary terms so as to minimize the possibility of government arbitrariness or bias. Situationally sensitive judgment by government officials, making speech-regulatory decisions turn on "the exercise of judgment and the formation of an opinion," is forbidden. Forsyth County v. Nationalist Movement, 112 S. Ct. 2395, 2401-02 (1992) (quoting Cantwell v. Connecticut, 310 U.S. 296, 303 (1940)); see Weinberg, supra note 38, at 1169-70; see also Frederick Schauer, The Second-Best First Amendment, 31 WM. & MARY L. REV. 1, 14-17 (1989).

[52] See Kennedy, supra note 38, at 1689.

[53] Schauer, supra note 46, at 510.

[54] See id. at 534-37; Sunstein, supra note 38, at 992-93.

[55] See Kennedy, supra note 38, at 1689; Sullivan, supra note 38, at 62; Sunstein, supra note 38, at 994; Weinberg, supra note 38, at 1168-69.

[56] See Richard A. Knox, "Women Go On Line to Decry Ban on 'Breast,'" BOSTON GLOBE, Dec. 1, 1995, at A12. This incident was likely the result of string-identification software. String-identification programs are excellent examples of rules-based filtering systems.
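
 A minimal sketch, in Python, of a string-identification filter of the kind this note describes. The forbidden-string list and sample messages are invented; the example shows how a purely rule-based substring test blocks discussion of breast cancer along with the material it targets.

# Hypothetical sketch of rules-based string identification: the "rule" is
# simply whether a forbidden substring appears anywhere in the text, with no
# judgment about context. The list below is illustrative, not any vendor's.
FORBIDDEN_STRINGS = ("breast",)
def violates_rule(text):
    lowered = text.lower()
    return any(s in lowered for s in FORBIDDEN_STRINGS)
print(violates_rule("Support group for breast cancer survivors"))  # True: blocked
print(violates_rule("Chicken breast recipes"))                      # True: blocked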

[57] In fact, RSACi is seriously flawed for reasons having nothing to do with rules and standards. The original RSAC rating system was designed for video games. RSACi carries over the categories and language of the earlier video-game rating system even where they are completely inappropriate. Thus, for example, RSACi's definition of "aggressive violence" on a web page excludes acts of nature "such as flood, earthquake, tornado, hurricane, etc., unless the act is CAUSED by Sentient Beings or Non-sentient Objects in the game or where the game includes a character playing the role of 'God' or 'nature' and the character caused the act." See RSACi ratings dissected, <http://www.antipope.demon.co.uk/charlie/nonfiction/rant/rsaci.html>. One consequence of RSAC's approach is that the Internet rating system nowhere acknowledges a distinction between images and text.

[58] In the wake of recent political speculation about bomb making recipes on the Internet, five major rating services agreed to work together to "ensure that parents can block Internet sites containing weapons and bomb making information and recipes." SafeSurf press release, "SafeSurf Enables Parents to Block Internet Bomb Sites," <http://www.safesurf.com/press/press16.htm>.

[59] The Specs default settings, for example, would deny to persons under 18 any "[m]aterial defaming one or more social groups or members of such groups." See Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>, & Specs: Age Defaults, <http://www.newview.com/cust/ss_stsp_lvl2.fcs.html>.

[60] Specs classes all nudity as either "in an artistic or educational context," or "with the principal purpose of exciting the viewer." Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>.

[61] See supra text accompanying note 17.

[62] To the extent that a user does not agree, the service will block sites he would want admitted, or let through sites he would want shut out, or both.

[63] See Using Content Advisor, <http://microsoft.com/ie/most/howto/ratings.htm>.

[64] As a ratings system multiplies parental choice, it becomes more complex, and, perhaps, harder to use. As a practical matter, rating system designers will have to balance fine differentiation of the ratings system against ease of use. See J.M. Balkin, Media Filters, the V-Chip, and the Foundations of Broadcast Regulation, 45 DUKE L.J. 1131, 1172 (1996). Cf. Solid Oak Software, Inc. VCR Rating System, <http://www.solidoak.com/vcr.htm> (arguing that a self-rating system should be "extremely simple," in contrast to "PICS compliant ratings systems where there are several dozen possible ratings").

[65] The Federal Communications Commission relies on an emphatically standard-like guide to determine whether speech broadcast on television and radio is "indecent." The resulting uncertainties have subjected the agency to critical attack. See Jonathan Weinberg, Vagueness and Indecency, 3 VILL. SPORTS & ENT. L. J. 221 (1996). The FCC, though, is a single entity; its choices will display far more consistency than those of millions of disconnected content providers each evaluating their own sites.

[66] All of these areas of speech trigger Cyber Patrol blocking (except for tattooing and body piercing, which constitute a CyberNOT only to the extent they result in "gross depictions"). See CyberNOT List Criteria, <http://www.microsys.com/cyber/cp_list.htm>. Tattooing and body piercing are specifically blocked by Specs (in a category, called "subjects of maturity," that lumps them in with "illegal drugs, weapon making . . . [and] some diseases"). Specs Glossary, <http://www.newview.com/cust/ss_sg_lvl3a_cf_fcs.html>.

Cf. Balkin, supra note 64, at 1166-68 (discussing competing images of "what characteristics count in making programming unsuitable for children").

[67] Rating the Web, <http://www.rsac.org/why.html>.

[68] Or if unrated sites are either all innocuous or all verboten, so that blocking software can treat them according to a single rule.

[69] Rating services are only beginning to confront the issue of updating ratings for particular sites as their contents change. RSACi provides one-year expiration dates for its labels. SafeSurf provides no expiration dates, instead simply enjoining content providers to update their ratings if there is a material change in the content of their speech. See PICS Developers' Workshop Summary, supra note 20.

[70] See supra notes 22 & 56 and accompanying text.

[71] See Douglas Bailey, Couplegate, BOSTON GLOBE, Feb. 22, 1996, at 54. The page displays pictures of Bill and Hillary Clinton and Al and Tipper Gore.

[72] The PICS Standard, <http://www.microsoft.com/intdev/sdk/docs/ratings/ratng001.htm>.

[73] See Whit Andrews, "Site-Rating System Slow to Catch On," WEB WEEK, July 8, 1996 (quoting Compuserve representative Jeff Shafer), available at <http://www.webweek.com/96July8/comm/rating.html>; Specs FAQs: Quick Quest: General Info, <http://www.newview.com/cust/ss_qq_lvl3a_reg.html> (recommending that users select the option of blocking all unrated sites "to ensure a safe Internet environment"); e-mail from Andy Oram, O'Reilly and Associates, to telecomreg mailing list (May 21, 1996) (on file with author).

[74] Few WWW sites today carry self-rating labels. See Andrews, supra note 73 (only two of the "more than 50" sites listed in the Entertainment Magazine: Sex category at Yahoo! carry self-ratings); see also Hiawatha Bray, "Rated P for Preemptive: System to Shield Kids From Adult Web Material Also Seeks to Keep Censors Off Net," BOSTON GLOBE, Jul. 25, 1996, at E4 ("only a tiny percentage" of Web sites have RSACi ratings).

[75] See Lewis, supra note 4; RSAC, Rating the Web, <http://www.rsac.org/why.html>. Cf. Balkin, supra note 64, at 1164 (discussing the V-chip).

[76] See Andrews, supra note 73. There are two separate problems here. The less important one is the technical issue of affixing a rating to each individual page. As noted supra note 20, the prevailing view at a recent PICS developers' workshop was that filtering software should expect to find PICS-compliant labels in each document; content providers cannot get away with supplying "blanket" ratings at the directory level or higher. On the other hand, it should be easy to develop software that will automatically insert labels into Web pages, so long as all of the pages on a site carry the same rating. See PICS Developers' Workshop Summary, supra note 20.
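
 By way of illustration, here is a minimal sketch, in Python, of the sort of automated label insertion contemplated above, assuming that every page on the site carries the same rating. The META-tag label text, directory name, and function names are illustrative only, not the actual RSACi or SafeSurf label for any site.

# Hypothetical sketch of site-wide label insertion: if every page on a site
# carries the same rating, a script can stamp each HTML file with the same
# PICS-style label. The label text below is illustrative only.
from pathlib import Path
LABEL_TAG = ('<META http-equiv="PICS-Label" content=\'(PICS-1.1 '
             '"http://www.rsac.org/ratingsv01.html" l '
             'r (n 0 s 0 v 0 l 0))\'>')
def insert_label(html):
    """Insert the label tag just after <HEAD>, if the page has one."""
    lowered = html.lower()
    i = lowered.find("<head>")
    if i < 0:
        return html
    i += len("<head>")
    return html[:i] + "\n" + LABEL_TAG + html[i:]
for page in Path("site").rglob("*.html"):   # hypothetical site directory
    page.write_text(insert_label(page.read_text()))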

 The more important problem arises when a content provider must audit each page of a large archive to determine what rating that page should receive. Robert Croneberger, director of the Carnegie Library in Pittsburgh, testified at the ACLU v. Reno trial that he would have to hire 180 additional staff in order to search the library's on-line materials (in particular, its on-line card catalog) so as to be able to tag individual potentially indecent items. Trial Transcript for Mar. 22, 1996, at 101-02, American Civil Liberties Union v. Reno, 929 F. Supp. 824 (E.D. Pa. 1996) (No. 96-963), available at <http://www.eff.org/pub/Legal/Cases/EFF_ACLU_v_DoJ/960322_croneberger.testimony>.

[77] See Recreational Software Advisory Council, "RSACi: RSAC on the Internet: Business Plan" (on file with author); How to Register Software Titles, <http://www.rsac.org/register.html>; see also Microsoft Press Release, Feb. 28, 1996, <http://www.microsoft.com/corpinfo/press/1996/feb96/rsacpr.htm> ("To encourage widespread rating of Internet content, RSAC will make its rating application available for no charge for the first year it is available on the Internet.").

[78] See Trial Transcript vol. III, at 192:3-4, American Civil Liberties Union v. Reno, 929 F. Supp. 824 (E.D. Pa. 1996) (No. 96-963) (testimony of Barry Steinhardt).

[79] See supra notes 34-36 and accompanying text.

[80] 487 U.S. 781 (1988).

[81] Id. at 795.

[82] Id. at 798, 800. The Court noted that "[p]urely commercial speech is more susceptible to compelled disclosure requirements." Id. at 796 n. 9 (citing Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626 (1985)); see also Hurley v. Irish-American Gay, Lesbian and Bisexual Group, 115 S. Ct. 2338, 2347 (1995). A self-rating requirement, though, would affect noncommercial as well as commercial speech. The Justices have noted that a state can compel doctors to make certain disclosures as part of the practice of medicine, see Planned Parenthood v. Casey, 505 U.S. 833, 884 (1992) (opinion of O'Connor, Kennedy, & Souter, JJ.), but that isn't this case either.

[83] 115 S. Ct. 1511 (1995).

[84] Id. at 1518.

[85] Id. at 1519-20.

[86] See also Hurley v. Irish-American Gay, Lesbian and Bisexual Group, 115 S. Ct. 2338 (1995):

 [O]ne important manifestation of the principle of free speech is that one who chooses to speak may also decide what not to say. . . . [Except in the context of commercial advertising,] this general rule, that the speaker has the right to tailor the speech, applies not only to expressions of value, opinion, or endorsement, but equally to statements of fact the speaker would rather avoid . . . . Nor is the rule's benefit restricted to the press, being enjoyed by business corporations generally and by ordinary people engaged in unsophisticated expression as well as by professional publishers. Its point is simply the point of all speech protection, which is to shield just those choices of content that in someone's eyes are misguided, or even hurtful.

Id. at 2347-48 (citations and internal quotation marks omitted).

[87] Meese v. Keene, 481 U.S. 465 (1987), is not to the contrary. The Court in that case approved statutory provisions pursuant to which the Justice Department characterized speech distributed by foreign agents as "political propaganda." The disseminator of the speech, however, wasn't required to characterize it in that manner. The case was about the extent to which the government can pejoratively characterize a person's speech, not about the extent to which government can force a person to characterize her own speech, pejoratively or otherwise.

[88] See Hurley, supra; Pacific Gas & Electric Co. v. Public Utilities Comm'n, 475 U.S. 1, 15 (1986) (plurality opinion) (government may not "require [speakers] to associate with speech with which [they] may disagree," nor force them to "alter their speech to conform with an agenda they do not set"); Wooley v. Maynard, 430 U.S. 705, 714 (1977) (government may not compel a citizen to "be an instrument for fostering public adherence to an ideological point of view he finds unacceptable"); West Virginia State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943) (government may not prescribe an orthodoxy and "force citizens to confess [it] by word").

[89] See supra text accompanying note 42.

[90] Baggett v. Bullitt, 377 U.S. 360, 372 (1964) (quoting Speiser v. Randall, 357 U.S. 513, 526 (1958)).

[91] Interstate Circuit, Inc. v. Dallas, 390 U.S. 676, 688 (1968); see also MPAA v. Specter, 315 F. Supp. 824 (E.D. Pa. 1970).

[92] See supra notes 76-77, 89-90 & accompanying text.

[93] The Court followed a similar reasoning process in McIntyre. The state in that case supported its ban on anonymous election materials by pointing to its interest in policing fraudulent statements and libel in election campaigns. The Court noted, though, that other provisions of state election law barred the making or dissemination of false statements. The value of the challenged provision was merely incremental. That incremental benefit could not justify the damage the provision did to free speech. 115 S. Ct. at 1520-22.

[94] See supra notes 72-73 and accompanying text.

[95] ACLU v. Reno, 929 F. Supp. 824, 833 (E.D. Pa. 1996) (opinion of Dalzell, J.).

[96] Id. at 877, 883.

[97] Id. at 881-82 (quoting Leathers v. Medlock, 499 U.S. 439, 448-49 (1991)).

[98] This concern is most salient in connection with blocking programs, such as Microsoft's Content Advisor, that block any access to restricted sites through the computer on which the program is installed, unless the program is disabled. Other programs, including Cyber Patrol, offer a more advanced feature known as "multiple user profiles": Each family member can have his or her own password, and the program can be configured at the start to grant the different password holders different levels of access. These programs make it easy for a parent to enable his own access to the sites he seeks to exclude his child from; they probably will become more common in the future. See PICS Developers' Workshop Summary, supra note 20 (noting the formation of a working group to specify formats for describing PICS user profiles). Parents may be wary of this feature, though, since the exclusion is only as secure as the parent's (frequently-used) password.
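
 A minimal sketch, in Python, of the "multiple user profiles" idea described in this note. The profile structure, password handling, and category thresholds are all invented for illustration, not taken from any vendor's product.

# Hypothetical sketch of per-user profiles: each password maps to its own
# filtering rules, so a parent's login can reach sites excluded from a
# child's login. A real product would store hashed passwords, not plaintext.
PROFILES = {
    "parent-password": {"max_nudity_level": 4, "block_unrated": False},
    "child-password":  {"max_nudity_level": 0, "block_unrated": True},
}
def allowed(password, site_nudity_level):
    """site_nudity_level is an integer rating, or None for an unrated site."""
    rules = PROFILES[password]
    if site_nudity_level is None:          # unrated site
        return not rules["block_unrated"]
    return site_nudity_level <= rules["max_nudity_level"]
print(allowed("child-password", 3))    # False: blocked under the child's profile
print(allowed("parent-password", 3))   # True: the same site is reachable for the parent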

[99] See, e.g., BESS.NET: The Service, <http://demo.bess.net/about_bess/the_service.html>. SafeSurf is now providing technology -- the SafeSurf Internet Filtering Solution -- that allows any ISP to offer parents this easy option. See Rose Aguilar, Marketplace Site Filters Criticized (Oct. 18, 1996), <http://www.news.com/News/Item/0,4,4609,00.html>.

 Even some households without children, if blocking software comes bundled with their Web browser, may choose to enable that software because they believe that otherwise they may be confronted with smut. Indeed, according to RSACi's Executive Director, Microsoft has indicated that future versions of Internet Explorer may activate blocking as a default. See Joshua Quittner, [title], THE NETLY NEWS, Dec. 13, 1996, <http://pathfinder.com/Netly/daily/961213.html>.

[100] When a user running blocking software seeks to conduct a search using an Internet search engine such as Alta Vista, the software will transmit the user's filtering rules to the search engine. The search engine will tailor its search so as not to return any sites excluded by the filter. Participants at the recent PICS Developers' Workshop agreed that this was the preferable approach, in part because it would be undesirable for users to get search results like "'here's the first 10 responses, but 9 of them were censored by your browser.'" PICS Developers' Workshop Summary, supra note 20; see also PICS, <http://www.w3.org/pub/WWW/PICS> (Frequently Asked Questions).
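
 A minimal sketch, in Python, of the arrangement described in this note, in which the search service applies the user's filtering rules before returning results. The rule format, label values, and sample index are hypothetical; nothing here reflects Alta Vista's or any other engine's actual implementation.

# Hypothetical sketch of server-side filtering of search results: the client
# sends its filtering rule along with the query, and the engine silently
# omits any hit whose label exceeds the rule's threshold.
SEARCH_INDEX = {
    "breast cancer": [
        {"url": "http://health.example/bc-facts", "nudity": 0},
        {"url": "http://adult.example/gallery", "nudity": 4},
    ],
}
def search(query, rules):
    """Return only those hits whose label falls within the client's rule."""
    hits = SEARCH_INDEX.get(query, [])
    return [h["url"] for h in hits if h["nudity"] <= rules["max_nudity_level"]]
print(search("breast cancer", {"max_nudity_level": 1}))
# -> ['http://health.example/bc-facts']; the second hit is withheld without comment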

[101] See, e.g., Gary Chapman, Universal Service Must First Serve Community, LOS ANGELES TIMES, June 3, 1996, at D1. See generally ROBERT H. ANDERSON ET AL., UNIVERSAL ACCESS TO E-MAIL: FEASIBILITY AND SOCIETAL IMPLICATIONS (1995), chap. 3 (discussing pros and cons of locating devices for e-mail access in the home, at work, in schools, and in libraries, post offices, community centers and kiosks).

[102] See QUESTIONS AND ANSWERS: Access to Electronic Information, Services, and Networks: An Interpretation of the Library Bill of Rights, <http://ala1.ala.org:70/0/alagophx/alagophxfreedom/electacc.q%26a>; Access to Electronic Information, Services, and Networks: An Interpretation of the Library Bill of Rights, <http://ala1.ala.org:70/0/alagophx/alagophxfreedom/electacc.fin>.

[103] I do not want to overplay this point. Many libraries have decided not to install blocking software. See, e.g., "Ann Arbor [Michigan] District Library Internet Use Policy" (on file with author). Other libraries have installed the software only on terminals intended for use by children. I am grateful to Linda Mielke, President of the Public Library Association and Director of the Carroll County (Maryland) Public Library, Kathleen Reif, Director of the Wicomico County (Maryland) Free Library, and Naomi Weinberg, President, Board of Trustees, Peninsula Public Library [Lawrence, New York] for educating me on these issues.

[104] Pornography and Gambling are Inappropriate in a Library, <http://www.librarysafe.com/library.html> (typeface in original). The vendor is the Library Safe Internet System.

 Librarians have shown great courage in their decisions to carry controversial books and artworks. They have repeatedly demonstrated their willingness to carry items of sexually explicit material that they consider valuable. Thus, recent cases litigated by the Freedom to Read Foundation include Lowe v. Kiesling, challenging a proposed ballot measure that would have forbidden public libraries from collecting any materials on homosexuality written for children, and Ong v. Salt Lake City Public Library, in which plaintiffs sought to bar a public library from exhibiting art including nudity. See Freedom to Read Foundation News, Vol. 20, Nos. 3-4, <http://www.sirs.com/partner/read/v20n3.htm>; Freedom to Read Foundation News, Vol. 19, Nos. 2-3, <http://www.sirs.com/partner/read/v19n2.htm>. By providing an uncensored Internet feed, though, a library makes available material much more difficult to defend in a political context.

[105] See Rosilind Retkwa, Corporate Censors, INTERNET WORLD, Sept. 1996, at 60; see also, e.g., Microsystems Announces Cyber Patrol Proxy (July 31, 1996), <http://www.microsys.com/prfiles/proxy796.htm>.

[106] Retkwa, supra note 105, at 61.

[107] The university was the University of Arkansas at Monticello. See e-mail from Carl Kadie to fight-censorship mailing list, Oct. 22, 1996; e-mail from Tyrone Adams to amend1-L mailing list, Oct. 22, 1996; e-mail from Stephen Smith to Jonathan Weinberg (Oct. 22, 1996) (all on file with author).

[108] It's possible that Disney or a similar entity will get large market share with a kid-centered interface limiting users to a specific list of kid-friendly sites, and that that interface will be so aggressively child-oriented that adults won't use it (and can't be suckered into using it). Even so, employers and similar entities using the Net will still have an interest in installing a grown-up interface with blocking capabilities, and presumably the market will respond to that.

[109] Consider America Online. It markets itself as a family-friendly service. It allows parents to confine their children to a "Kids Only" area, or to disallow their access to chat rooms and Usenet news. It monitors the use of forbidden words in various contexts. It censors messages posted in the advertisers' area it calls "Downtown AOL," removing advertisements that its manager thinks do not have "the look and feel that best fits [AOL]'s environment." At the same time, though, it allows its members to create chat rooms with names like "m needs bj now," "bond and gaged f4f," and "M4Fenema."

[110] Illegal Material on the Internet, <http://dtiinfo1.dti.gov.uk/safety-net/r3.htm>. Under the initial proposal, users were to rate with RSACi, see id.; the proponents apparently have backed away from that. The proposal also recommends that ISPs take steps to support rating and filtering of Usenet newsgroups. For an explanation of the mechanics of the Usenet proposal, see Turnpike News Ratings, <http://www.turnpike.com/ratings>.

[111] See Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions (Oct. 16, 1996), available at <http://www2.echo.lu/legal/en/internet/content/communic.html>. A recent EU working group document recommends research into new rating systems, not RSACi, so as to "take account of Europe's cultural and linguistic diversity" and "guarantee respect of [users'] convictions." Report of Working Party on Illegal and Harmful Material on the Internet, <http://www2.echo.lu/legal/en/internet/content/wpen.html>.

 Other countries have adopted more drastic approaches. See, e.g., Declan McCullagh, The Malaysian Solution, THE NETLY NEWS, Dec. 9, 1996, <add url>; Kathy Chen, China Bans Internet Access To as Many as 100 Web Sites, WALL STREET J., Sep. 5, 1996, available at <http://www.eff.org/~declan/global/china/china.bans.090596.txt>; James Kynge, Singapore cracks down on Internet, FINANCIAL TIMES (LONDON), July 12, 1996, available at <http://www.eff.org/~declan/global/sg/ft.071296.article>.

[112] See RSAC Press Release, Compuserve to Rate Internet Content by July 1 (May 9, 1996), available at <http://www.rsac.org/press/960509-1.html>.