
An Ethical Framework for Biological Sample Policy

DR. BUCHANAN: Thank you. I'm honored to be here. Since we are short on time, I'll leave it up to you to cut me off at the appropriate point. I definitely want to hear from Frank, and I think I have the advantage of having already given you a large text, whereas he hasn't, as I understand it. First, let me briefly thank Tom Murray and Eric Meslin, as well as the members of the Genetics Subcommittee, for the helpful guidance I received in writing the paper. I benefited quite a lot from the Subcommittee's transcripts.

Let me mention a couple of fairly minor changes that I intend to include in the final version of my paper. First, as some members of the Subcommittee have pointed out and as is noted in Courtney Campbell's commissioned paper, there's one item that should be added to my list of interests that weigh in favor of more liberal access to samples: the interest that some individuals who may be sources of samples take in contributing to medical progress and scientific research. I talked about the societal interest in medical progress and scientific research, but I didn't give any explicit attention to the fact that there are some individuals who have strong commitments to contributing to that process.

Second, with respect to previously collected samples that are individually identifiable but which were collected without adequate informed consent, in the fourth draft of my paper I proposed that sources be given a choice of either having the sample rendered nonidentifiable to them or giving a blanket consent to identifiable uses of the sample, with various safeguards and with the possibility of a requirement of specific consent for certain sensitive uses that might fall into what I call the "special scrutiny category." I'm now inclined to add a third option, though I'm not certain about this, namely that the source be given the right to have the sample destroyed after being given a suitable explanation of the possible cost to himself or herself and to others that the destruction of the sample might entail. This seems to me to be at least a plausible suggestion if we're talking about cases in which the sample was not collected under conditions of anything approaching informed consent. Those are two modifications, for those of you who've had a chance to read the paper.

Before hearing their comments and attempting to answer any specific questions, I'd like to take just a few moments to reiterate the strategy of my approach and to address the issue of how the various interests I identify are to be weighed against one another in order to determine sound policy recommendations. I'd also like to explain briefly how my analysis supports what I take to be several recommendations toward which the Committee's deliberations seem to be pointing.

First of all, a word about strategy. My strategy is to dig beneath or behind the usual rhetoric about rights to privacy and confidentiality versus the value of scientific progress; that's the way the issue is often framed. My strategy depends upon an assumption, which I defend in the paper and which I'm prepared to defend at greater length, namely that rights are best viewed as protectors of morally important interests that individuals have, either as individuals or as members of groups. Given this assumption, the appropriate procedure is to identify all the morally legitimate interests that weigh either in favor of greater control by the source over the sample's uses or in favor of wider and less constrained access to the sample for various uses.
And here perhaps a cautionary word is in order. It's not a matter of taking an interest-based approach versus a morality-based approach. I'm only identifying what I take to be morally legitimate interests, and the moral analysis then comes in trying to see what the relative weights of these interests are. So although I talk a lot about interests, this shouldn't be understood as a kind of might-makes-right approach, or as looking at which interests are most politically powerful, or anything of that sort. It is very much a moral analysis.

Now let me also comment very briefly on the sense in which the approach I advocate in the paper is a secular perspective on the ethical issues. It's very important that this not be misunderstood. It's a secular perspective in the sense that I have not explicitly invoked any distinctively religious principles or ideals in the analysis. However, this is not to say that there's anything in my analysis that should be controversial or repugnant from the standpoint of any of the major religious viewpoints. I focused exclusively on the interests of persons that are at stake in the policy debate concerning stored biological samples, attempting to illuminate what sorts of broad policy options would best achieve a fair balancing of morally legitimate interests, a balancing that reflects widely held and defensible moral principles. Given that we live in a pluralistic society that contains many ethical perspectives, not all of which are religious in nature, as well as many different religious perspectives, I can imagine no plausible approach to the ethics of public policy other than one that focuses primarily on the interests of persons. And again, I don't want that to be misunderstood in an overly individualistic way. Sometimes the interests of persons that are most directly relevant are their interests as members of groups that are very important to them and with which they identify. But having said that the focus is on the interests of persons, this is not to say that there's anything in my analysis that could not be framed in distinctively religious terms by individuals whose primary ethical perspective is religious. It's just a matter of which way of framing the issues is most appropriate for the public policy debate.

Now let me say something briefly about the big task, the task of weighing the various interests that I identify. I identify a number of interests that speak in favor of more control over the uses of samples, more confidentiality, and more protections for the source, and, on the other hand, a number of different interests that speak in favor of wider accessibility and more types of uses of the samples by more types of individuals. Let me say something about the problem of how to weigh these. We cannot simply count up the interests on each side and go with the approach that has the greatest number of different types of interests in its favor. The matter is not that simple, because some interests should count for more than others; some are more morally important. And I think we can give reasons to explain why this is so in particular cases. Let me give a few examples of places in the paper where I give some indication of what the relative weights of various interests are, or at least of how one would go about arguing for them, and of what kinds of considerations are relevant to determining those weights.

First, consider the interest in avoiding insurance discrimination. This is an example of an interest whose weight will vary depending upon how high the corresponding risk is, the risk of insurance discrimination. Risk is usually understood as the magnitude of a possible harm times the probability that the harm will occur. That is, how bad is the bad outcome and how likely is it to occur? In a system in which there is no risk rating for insurance coverage, the risk of insurance discrimination is zero, at least from any reasonable perspective. In our system the risk is greater than zero, because we do have risk rating for much of the insurance that people get: people are sorted into different groups according to their risk of ill health. But just how high the risk is (more specifically, how great the probability of discrimination is) remains a matter of dispute. Moreover, the risk of discrimination may vary depending upon the type of information that is made available to insurance companies. The main point I try to make on page 12 of the fourth draft of the paper is that how weighty the interest in avoiding insurance discrimination is will depend on the nature of the institutional arrangements, and these can change. That's why we shouldn't think in a static way: what the weights of these morally legitimate interests are will change as our institutions change, and they're changing now in some cases.

Another example: consider the interest in avoiding stigmatization. If there's a high probability of serious discrimination or serious stigmatization, then clearly this interest counts for a lot relative to other interests that might range on the other side of the balance. But the risk of stigmatization can vary as cultural attitudes change. Information about some disease conditions may be, at a particular time in our history, very stigmatizing; information about other disease conditions may not be stigmatizing at all. If our society does a better job of educating people to the fact that everybody carries several genetic mutations, for example, perhaps stigmatization will eventually cease to be so significant a factor. And to that extent, the case for restrictions on access to samples will be correspondingly weaker.
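To make the weighing just described a bit more concrete, the standard risk decomposition invoked above can be written out; the numbers in the worked example are purely illustrative assumptions, not estimates from the paper or from the Commission's record.

\[
\text{Risk} \;=\; \text{Magnitude of harm} \times \text{Probability of occurrence}
\]

For instance, holding the (hypothetical) magnitude of the harm of insurance discrimination fixed at 8 on a 10-point severity scale, a system with extensive risk rating might carry a probability of, say, 0.3, giving a risk of \(8 \times 0.3 = 2.4\); a system with no risk rating carries a probability near zero, giving a risk near zero. On this reading, the weight of the interest in avoiding discrimination or stigmatization moves with the probability term, which is exactly what changing institutional arrangements and cultural attitudes alter.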

Third, consider the interest in controlling the profitability of one's samples. Sometimes samples that are taken from people are used to create immortalized cell lines that are used in research, and sometimes profits result from this. If there were a clear, institutionally recognized property right that people had in their samples, then this interest in controlling the profitability that arises from uses of one's sample would be very weighty. This interest in profitability would also be very significant if there were a well-established practice, regardless of any legal property right, of sources sharing in the profits derived from uses of their samples. But neither of these conditions is satisfied at present. So, comparatively speaking, the interest people have in controlling the profitability that stems from uses of their samples is not very great.

A fourth and final example. Consider the interest in being able to control what happens to one's sample, independently of one's interest in avoiding discrimination, independently of avoiding dignitary harm, such as not being treated respectfully, and independently of avoiding nonconsensual bodily invasions. Call this the interest in control, or in choice per se. To say that whenever this interest is not fully satisfied there is a violation of a person's autonomy would be hyperbolic. As a normative concept, the concept of autonomy is better reserved for domains of choice that really matter, that affect a person's important interests and aspirations. Not everything that gives more choices increases autonomy. And that's why it's very important not to make the mistake of assuming that the interest in control over samples per se is a very weighty kind of interest, especially compared to some of the important interests on the other side.

Now let me conclude very briefly by saying how I think my report bears on some conclusions toward which the Commission may be tending, at least from what I've seen from the transcripts.

First of all, the question, "Is it permissible to use nonidentifiable collected samples without consent?" My understanding is that there's at least some sympathy in the Commission for an affirmative answer to that. My paper supports that affirmative answer; that is, I think it is permissible, and I think I can give good reasons why it's permissible, to use nonidentifiable collected samples without consent, except in certain cases. The main exception to that general principle is where the collection of the samples itself involved human rights violations, as with some seriously immoral experimentation. The other case that might be an exception to this general principle is where a use of the sample falls into what I call the "special scrutiny category." For example, the sample may be used in research on the genetic bases of certain kinds of antisocial or criminal behavior, where there's a long record of racist or other misuses of this kind of research and where certain groups have good reason to feel a special vulnerability. In those kinds of cases, some special scrutiny, which I discuss in some detail in the paper, might be advisable even though the sample is not identifiable as to individuals, because it may be identifiable as to the group from which the individual comes, and the group and its members may be vulnerable, for historical reasons, to negative stereotyping and other kinds of discrimination.

The second question, to which as I understand it the Commission may be at least leaning toward a positive answer: Is it necessary to obtain prospective consent when the source of the sample can be identified? I understand that there's some sympathy for an affirmative answer. I agree with that, though it may be that blanket consent, combined with specific consent for special scrutiny cases, would be the most appropriate route there.

A third question to which an affirmative answer, I think, seems appropriate: Can the Commission's recommendations all be accommodated within the existing regulatory structure? I think the answer is yes. I don't see anything in my paper which would call for any new structures. I do mention various places where existing structures, including institutional review boards, might play new roles, but I don't see any need for radically new institutional arrangements.

Finally, concerning collected samples where there may be a possibility of identifying individual sources: Does a greater probability that the source might be identified necessarily require greater safeguards? Is there a kind of one-to-one correlation between the probability that the individual source can be identified and the need for more procedures, more safeguards, and more hurdles? I don't think so; I don't think the relationship is that simple. My analysis of the relevant interests indicates that in some cases, even though there may be some increased probability that the individual source will be identified, that is not by itself a sufficient reason for added safeguards unless the interests that would likely be harmed by identification are sufficiently weighty.

Now, I had intended to go through the summary of the main conclusions that I provided to the Commission, which isolates about 14 different summary points. But since you already have that, and since we're running short of time and I don't want to cut into Frank's time, I think I'll forgo that and just open for discussion now, if that's agreeable to you.

DR. SHAPIRO: Thank you very much. Let's take just a few moments for discussion now; then I'd like to turn to Frank. And then if there's more time left afterward, we can have further discussion. Maybe just one or two short questions right now.

MS. CHARO: One of the things that has come out in the discussion so far is that the existing regulations balance the protection of individuals against the need to do the research, in the form of a waiver of the usual requirement that you obtain consent. There are many conditions for that waiver; one of them is that the research involve no more than minimal risk. And it's occurred to many of us that there is an insufficient understanding of the idea of minimal risk in this area of nonphysical invasions of interests. And, although you haven't been prepped on this, so it may be unfair, I wonder if you could speculate, based on your list of invasions of interests, on how one might get a handle on the usual measure of minimal versus nonminimal risk, which is by reference to our experience in everyday life. How would you evaluate the experience of invasions of those interests in everyday life as a kind of benchmark against which to measure this extra invasion by virtue of research?

DR. BUCHANAN: Well, that's an extremely important question. I think you're really entering new territory here in trying to think about what counts as minimal risk where it's not a risk of some physical mishap, like bleeding from venipuncture or something like that. I mean, it's curious in a way. One of the developments in the idea of informed consent has been to extend it from protecting basically against batteries to protecting against what I call "dignitary harms" and against other kinds of psychosocial harms. And yet there hasn't been a corresponding refinement of the idea of what counts as minimal risk there. I don't try to do that in this paper. Perhaps I should, but those of you who struggled through 73 pages are probably thankful I didn't try to do any more than I did.

I think that my analysis helps to some extent, because it at least points out and sets aside certain kinds of misunderstandings that might come into play in trying to determine what counts as a more than minimal risk. The interest in choice per se, in having control over what happens to the sample, seems to me not to be an interest whose thwarting or lack of satisfaction constitutes harm in any significant sense. That much is certainly true. Is the risk of insurance discrimination a more than minimal risk? I'm not sure there's any general answer to that question. You might have to look at different disease conditions, because you're really asking what the magnitude of the harm is of an insurance company finding out that you have a certain disorder, and what the probability of that harm is. I think we have remarkably little information about this. As I point out in the paper, there are empirical surveys of what people think they've experienced by way of genetic discrimination, and there clearly is some genetic discrimination, as well as discrimination on grounds that are not genetic at all, such as an insurer simply knowing that you've had cancer, or your accurately reporting a family history of the disease. As more and more of us find out that we have potentially deleterious mutations in our genome, which we all do, and especially if insurance coverage begins to extend to genetic testing and to dealing with genetic conditions when they can be treated, then both stigma and the risk of discrimination will go down.

So I guess what I'm saying is that I think you're on to a hugely important topic, and I think my paper provides some of the initial materials for tackling it. But the main lesson at this point is that it's probably going to have to be a fairly piecemeal approach, one that looks at a number of the different interests I've isolated and asks, in particular cases for particular groups of individuals, how serious is the harm if it occurs, and how probable is its occurrence? And that requires a lot of empirical data which we don't seem to have at this point.

DR. SHAPIRO: Thank you. With everyone's permission (I know Bernie has a question, and others do as well), I would really like to turn to Frank now, just to make sure we give you a full opportunity to say what you would like to the Commission.

As you know, we've been talking from time to time about community interests, what this means, how we should think about it, whether we should think about it, and so on, a whole series of issues. Frank is going to talk to us about sensitivities and concerns in these kinds of research areas from the perspective of Native American communities.

Thank you very, very much for being here. We're very glad to welcome you here and very anxious to hear what you have to say.