Please do not quote without the permission of the author. Comments very welcome!

Representing the User in Software Design

Diana E. Forsythe
Associate Professor
Program in Medical Anthropology
University of California, San Francisco
1350 Seventh Ave., Room 302
San Francisco, CA 94143-0850
forsythe@smi.stanford.edu
version of 24 April 1997

Abstract

In this paper, I draw upon long-term participant observation among software designers to unpack the concept of representation in the world of knowledge-based computer systems. Among social scientists, notions such as simulation, modeling, and imaging evoke representational modes that may be quite broad and informal. To software designers, in contrast, "representation" implies an act that is both formal and specific: the explicit encoding of information in a programming language. I want to build upon this distinction to address the question of how and to what extent knowledge-based systems can be said to embody representations of their intended audience(s). From a computer science standpoint, such representations are embedded in only a small minority of such systems; this feature is known as "user modeling." In contrast, adopting a broader anthropological notion of representation, I would argue that all such systems embody images of the user. Whether or not designers see or intend this, their daily design decisions routinely reflect tacit as well as explicit assumptions about the identity and characteristics of future users. Drawing upon examples from my own experience in medical informatics, I will illustrate this contention, pointing out what some system-builders take for granted about their intended audience and the nature of their work.

Introduction

As Lynch and Woolgar have pointed out (Lynch & Woolgar 1990, p. 1), a computer program is a kind of representational device. The programs known as knowledge-based systems are particularly interesting from this perspective because of their complexity. This paper emerges from a long-term anthropological inquiry into the construction of knowledge and technology in medical informatics, a field defined by the application of intelligent systems technology to problems in medicine. A central theme of my research has been to address the meaning and nature of representation in this context. This question raises interesting epistemological issues, as well as problems of agency and intentionality in relation to the practice of system-building (Forsythe 1992; Forsythe 1993a; Forsythe 1993b; Forsythe 1994; Forsythe 1996a).

This paper considers the question: what does "representation" mean in a computational context? Two considerations lead me to address this issue. First, what different people take "representation" to mean in relation to a computer system can vary considerably. For example, as I will show, this notion is understood rather differently in computer science and in anthropology.

Second, I want to make a reflexive point about our own analytical practice in science and technology studies. If different understandings of the concept of representation are in use in different academic disciplines, then obviously this notion is a cultural construction rather than a stable, universal category.[1] This makes it incumbent upon us to be very clear what we mean by "representation" when we use the concept in our own analyses. Close attention to the problem of whose categories are being used to construct reality is normative in anthropology but receives less emphasis in science and technology studies.

I raise this latter point not only to locate the ethnographic material to be presented below, but also to suggest that encouraging epistemological reflexivity is one of the contributions that anthropology can make to science and technology studies.[2] Over the past two decades, the re-emerging sociology of science has done a great deal to establish the idea that the production of science and technology is socially contingent. The anthropology of science and technology that has been developing alongside it since the mid-1980s (Hess & Layne 1992; Traweek 1993) has demonstrated that the production of science and technology is culturally contingent as well. That is, although the people whose practice we study may think of their theories and analytical categories as universally true, these truths--like any other knowledge (Geertz 1973; Geertz 1983)--are located in particular cultural and disciplinary contexts.[3]

As STS researchers, we need to acknowledge that our own theories and categories are also culturally located. Such recognition not only implies epistemological care, it is also useful: conscious attention to the reflexivity it engenders is a productive research strategy.[4] Below, I explore this approach in relation to different constructions of the notion of representation.

Background

When anthropologists do field research among people very different from themselves, they face the task of trying to make the strange familiar. Observing what seems like an alien way of experiencing the world, the anthropologist searches for common meanings. Those of us engaged in field studies of science and technology production in our own society tend to encounter the opposite challenge. Faced with informants very similar to ourselves, we must instead make the familiar strange (Forsythe 1995a). Encountering what sound like shared meanings, we must pay attention to the question of whether what is meant is actually the same.

From 1986 to 1994, I was a full-time participant observer in five software development laboratories in the United States, four in academia and one in industry. My informants were largely trained in computer science and engineering; some had medical training as well. The unusual length of this fieldwork reflects in part the complexity of the technological processes under investigation; knowledge-based systems are typically produced collaboratively and take several years to develop. It also reflects my concern as an interpretive cultural anthropologist (Geertz 1973; Geertz 1983) to ground analysis in an understanding of what events mean to the people involved.

I had been doing this research for a number of years before I happened to notice that while my informants and I all used the notion of "representation" in our work, we seemed to be using the word in different ways. Inquiring systematically into this question, I found that the underlying concept is understood differently by computer scientists and anthropologists. In the following sections, I outline this difference in perspective and illustrate it with examples from medical informatics. I will use this field material to explore a question about a particular type of representation. The question I want to ask is under what circumstances a computer program can be said to embody representations of its intended audience--a feature known in computer science as "user modeling."

Different Meanings of Representation

When computer scientists use the word "representation," what they mean is the explicit and intentional encoding of information in a system. To represent a piece of information in this sense is to write it in computer code. For example, the knowledge bases of intelligent systems encode explicit information about task domains, problem attributes and problem-solving strategies. In addition, intelligent interfaces generally incorporate some kind of model of the user, that is, the person or type of person who is intended to use the system. All of these fall within the computer science meaning of "representation." Understood in this way, representation receives systematic attention in computer science.
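To make this narrower sense concrete, consider a minimal sketch (my own illustration in Python, not drawn from any system discussed in this paper) of what explicit encoding looks like. The rule, attribute names, and values below are all hypothetical; the point is simply that, in the computer science sense, only what is written down in this way counts as represented.

    # A hypothetical sketch of "representation" in the computer science sense:
    # information counts as represented only insofar as it is explicitly encoded.

    # A fragment of domain knowledge, encoded as an explicit rule.
    DOMAIN_RULES = [
        {"if": {"symptom": "recurrent unilateral headache"},
         "then": {"candidate_diagnosis": "migraine"}},
    ]

    # A fragment of a user model: explicitly encoded attributes of one user.
    user_model = {"age": 34, "sex": "female",
                  "reported_symptoms": ["photophobia", "blurred vision"]}

    def encoded_facts(model):
        """Return only what the system explicitly 'knows' about the user."""
        return sorted(model.items())

    # Designers' tacit assumptions about who the user is appear nowhere in
    # these structures, and so fall outside representation in this narrow sense.
    print(encoded_facts(user_model))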

In contrast, anthropologists (and--I believe--many other qualitative social scientists) generally use the term in a much broader sense, which would also include the embedding of meanings in forms other than explicit code. Over the past decade or so, anthropologists have been paying attention to the production and use of intelligent systems technology, bringing to this task an overlapping but considerably more inclusive notion of representation. These observers have noted that in addition to explicitly encoded information of the sort described above, knowledge-based systems also embody information whose inclusion is not necessarily either explicit or intended by the system-builder(s). Such information includes tacit assumptions and expectations characteristic of the cultural, disciplinary, or practice traditions of designers and/or their experts. For example, such systems have been shown to represent tacit assumptions about the nature of knowledge (Forsythe 1993b) and work (Forsythe 1993a), beliefs about the relation between plans and human action (Suchman 1987), expectations concerning the nature of work practice in particular settings (Sachs in press; Suchman 1992), and cultural theories about individualism and education (Nyce & Bader 1993). Because such assumptions and perspectives are taken for granted, they can be invisible to those who hold them. However, since they are not necessarily shared by future users, their embedding in a system may contribute to problems of acceptance when the system is fielded.

Social scientists have consistently tried to show that the embedding of tacit beliefs and expectations in knowledge-based systems has problematic consequences for the ways these systems work. However, this critique has had relatively little influence on the way systems are actually built. This may be in part because much of the social scientific critique of system-building practice has focused on something that from a computer science standpoint is not actually representation at all. Suchman, Nyce and Bader, Sachs, and I have all focused upon phenomena that fit the broad, social science definition of "representation" but not the more restrictive computer science definition. Perhaps because of this conceptual disparity, system-builders and social scientists talk past each other on this matter.

What Should Count as a User Model?

In previous work, I have asked how it is that software designers in medical informatics seem consistently to build aspects of their own world view into the systems they construct (Forsythe 1996a). This question raised the problem of agency, since I claimed that my informants were doing something that they apparently did not intend to do.

Building information about one's own perspective into the software is not regarded as orthodox system-building procedure. In contrast, system-builders do sometimes intentionally represent information about their intended users in their software. In the natural language community of computer science, user modeling is a recognized and innovative programming technique (Moore 1994). If we adopt the very specific and formal notion of what counts as user modeling in computer science, a small number of systems can be said to represent information about their future users.

If one adopts the broader, less formal notion of "representation" used by social scientists, however, a great many more systems appear to contain some sort of user model. Indeed, I would claim that not only all knowledge-based systems but also some (perhaps many or all) systems that are not knowledge-based embody some images of their intended audience(s).

In support of this argument, I offer case material from two software development projects on which I have been a participant observer. Both are educational in intent, aiming to provide useful information to medical providers in one case and to patients in the other. The first case involves a relatively simple system that contains no formal user model. This is not an intelligent or knowledge-based system: it is a collection of web pages. From a computer science standpoint, it represents no information about the user. As I will try to show, however, some major assumptions about future users are embodied in the design of this software.

The second case involves a more complex piece of software--a knowledge-based explanation system for people suffering from migraine (Buchanan, Moore, Forsythe, Carenini, Ohlsson, & Banks 1995; Forsythe 1995b; Forsythe 1996a). This prototype system does contain a formal user model that collects and represents a limited amount of explicit information about each individual user. However, as I show in detail elsewhere (Forsythe 1996a) and will try briefly to show here, the system also embodies some notable additional tacit assumptions of the system-builders, including some about prospective users.

The first of these, the genetics project, is based in a medical school at an American university. Project personnel include clinical geneticists (physicians), genetic counselors, computer scientists and information scientists. Over a period of several years, I acted as an occasional evaluation consultant to the project.

The goal of this project is to put onto the Internet several hundred peer-reviewed "disease profiles." A few of these profiles are already on the Web. The profiles contain information about inherited conditions, currently available genetic tests for these conditions, implications for diagnosis and treatment, relevant literature, etc. If you have the password, these profiles are accessible with a graphical web browser such as Netscape.

The software for the genetics system does not generate unique responses for individual users. As far as I know, it contains nothing that a computer scientist would call a user model. That is, nothing is explicitly encoded about users in the system. However, it seems to me as a social scientist that the genetics system does represent information about its intended users. Some distinct assumptions and expectations about the system's anticipated audience are inscribed both in the design of the system and in the plan adopted for constructing and evaluating it. While these assumptions are simple in a conceptual sense, they say a good deal about what project members take for granted about the intended beneficiaries of the system. They also reveal what the developers see as the problem the system should address, and what they believe to be the solution to that problem.

In planning meetings, project personnel decided that the system should provide information for two audiences. The first user group was defined as genetics professionals, that is, geneticists (who have MDs and/or PhDs) and genetic counselors (who typically have master's level training and are responsible for counseling and educating patients and families about genetic issues). The second user group was defined as everybody else in the medical world who might encounter patients and families with genetic concerns. Audience #2 includes primary care physicians as well as all specialist physicians who do not work in genetics. It also includes nurses who work in primary care and in all specialist settings that are not defined as "genetic." The genetics project plans to produce every disease profile in two versions, one for each audience. Both versions of each disease profile will be created by a clinical geneticist specializing in that condition.

To project personnel, this is a simple and obvious approach to creating the genetics system. From their perspective, it is a matter of common sense that the two audiences should be defined as they are, that there should be one version of each disease profile for each audience, and that these profiles should be produced by geneticists.

In contrast, it seems to me that this plan incorporates some significant and untested assumptions about future users. First, in defining Audience #1, this design treats clinical geneticists and genetic counselors as homogeneous, assuming their information needs to be the same. In practical terms, since the disease profiles are to be constructed by geneticists, it assumes that genetic counselors have the same information needs and the same points of reference as geneticists. This seems to me to be an assumption worth checking. Genetic counselors do a good deal of work with patients and families on their own; their training and practice are not identical to those of physicians. However, this assumption is not going to be tested systematically. In its preliminary stages, the project did carry out an informal information needs survey--but only of geneticists.

Second, in defining Audience #2, this design treats all other medical practitioners as homogeneous. All physicians but geneticists--primary care providers, dermatologists, plastic surgeons, psychiatrists, and so on--are assumed to have the same information needs and the same points of reference with respect to genetic issues. Furthermore, the information needs and perspectives of all nurses are assumed to be the same with respect to genetic questions, whether they work in primary care or non-genetics specialty settings. And finally, the information needs and perspectives of all these physicians are taken to be the same as those of all these nurses with respect to genetic issues. Again, no information needs assessment of any of these practitioners is planned, presumably because project personnel do not see this design as embodying any particular assumptions. Thus, the design of the system rests on the assumption that a single body of information at a single level of difficulty will be appropriate and comprehensible for everyone in Audience #2.

And third, since geneticists will construct all the profiles, it is assumed that geneticists know what questions will arise on the part of this large and diverse second group of users and that they know as well what would constitute a useful answer for every part of this audience.

While the genetics system contains no user model, then, its design certainly does embody some beliefs about the system's intended users. It also tells us something about the system's builders. The definition of audiences divides the medical world into genetics professionals and everyone else. My impression is that this we/they distinction mirrors the way clinical geneticists tend to view the medical world.

One interesting feature of this design is its deletion of genetic counselors. Clinical geneticists and genetic counselors work together, but their practice is not identical. Why should their information needs be assumed to be the same? It appears that from the standpoint of physicians in genetics, genetic counselors do work that is not distinct from their own. In reflecting this tacit assumption, the design represents something about the power hierarchy in the field of genetics.

A second interesting feature is the assumption of homogeneity in Audience #2. While it seems surprising to assume that genetic counselors and clinical geneticists would necessarily have the same questions about this material, it staggers the imagination to make the same assumption about everyone else in the medical world. This assumption, apparently unremarkable to clinical geneticists, is being replicated in the design of technology ostensibly intended to meet the needs of people about whose needs they seem to know very little--and do not plan to check before constructing the disease profiles.

In various ways, then, the design of the genetics system reflects two striking assumptions: first, that genetic counselors share the information needs and frames of reference of the clinical geneticists who will write the profiles; and second, that the rest of the medical world--physicians and nurses of every specialty--constitutes a single, homogeneous audience whose questions geneticists can anticipate and answer without consulting them.

As a consultant to the genetics project, I repeatedly pointed out that assumptions were being made that project members might want to test. Among other things, these assumptions raised a methodological problem: how to evaluate the utility of the genetics system, given the enormous diversity of the residual category defined as Audience #2? The project considered funding only several weeks of fieldwork for this evaluation--so which users should be targeted? Team members listened politely to my comments, but did not seem to understand what I was saying. To them it was apparently obvious that geneticists are the best authorities on what other people need to know about genetic problems, and that in their lack of genetic expertise, non-geneticists are pretty much the same. They did not seem to see as meaningful the notion that one might want to investigate the way various categories of non-geneticists think about genetic issues in order to know how to formulate explanations that might make sense to them.

I have argued that some tacit assumptions about users are inscribed in the design of the genetics software. As an instance of representation, this example is fairly simple. However, the assumptions in question have contributed to design decisions that will affect what goes in the system and the terms in which this material will be presented. These things in turn are bound to affect which future users are likely to find the disease profiles comprehensible and useful. So does the genetics system represent a model of its users?

The migraine project was a collaborative, three-year endeavor to design and build a patient education system for people with migraine (Buchanan, et al. 1995). The project produced a prototype natural language system intended to empower migraine sufferers by providing them with information about their condition and its treatment. Designed to be used by both patients and doctors (or other healthcare providers), the system consists of two linked components: the History-taker and the Explanation system. The development team anticipated that the system would be used as described in the following scenario:

A new headache patient comes into the doctor's waiting room and is invited to sit down at the computer and begin using the system. First, the History-taking Module presents an automated questionnaire, which takes a detailed initial history of the patient's headaches. Based on the user's responses, the system prints out a written summary for the neurologist to use when seeing the patient. Before subsequent visits, the system takes a shorter update history, again producing a written summary for the physician. Following each encounter with the doctor, the Explanation Module of the system is available for patients to use to pursue questions of interest to them. Since the History-taker sends information to the Explanation Module, which also receives up-to-date information about the patient's diagnosis and medications from the physician, the system is able to provide on-screen explanatory material that is tailored to each individual patient. This material contains information about the patient's condition and current medications, offering further explanation as desired. In addition, it includes some general (i.e., non-tailored) information about the experience and treatment of migraine on topics seldom addressed explicitly by doctors (Forsythe 1995b). Several levels of explanation on each topic are offered, activated when the user clicks with the computer's mouse on words and phrases of interest in each text screen. When the patient is finished using the system, the computer prints out a record of the explanatory material presented that day for the patient to take home.

When a user sits down at the prototype system, the History-taker, an automated questionnaire, inquires about some basic demographic characteristics (age, sex, menstrual history in the case of female users). It then generates a long series of questions about the user's headaches, headache history, past and present medications, known migraine triggers, etc. Based on each user's responses to these questions, the History-taker progressively builds a user model, using that model to adapt later questions to the user. For example, since migraine is often related to hormonal cycles, women are asked a series of questions about their menstrual history. These questions are adapted by age: women past menopause are asked somewhat different questions than younger ones. Male users are not asked these questions at all.
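The following sketch suggests, in rough computational terms, the kind of adaptive questioning just described. It is my own reconstruction in Python, not the project's code; the question wording, attribute names, and the age threshold for menopause-related questions are assumptions made purely for illustration.

    # Hypothetical reconstruction of the History-taker's adaptive questioning.
    # Field names, question wording, and the age cutoff are illustrative only.

    def demographic_questions():
        return ["How old are you?", "What is your sex?"]

    def menstrual_history_questions(user_model):
        """Select menstrual-history questions using the evolving user model."""
        if user_model.get("sex") != "female":
            return []  # male users are not asked these questions at all
        if user_model.get("age", 0) >= 55:  # assumed post-menopausal cutoff
            return ["At what age did your periods stop?",
                    "Did your headaches change after menopause?"]
        return ["Are your headaches related to your menstrual cycle?",
                "Do you currently take oral contraceptives?"]

    # The model is built up progressively from the user's earlier answers.
    user_model = {"age": 42, "sex": "female"}
    for question in demographic_questions() + menstrual_history_questions(user_model):
        print(question)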

After the user has finished filling out the headache questionnaire, the system offers various kinds of information about migraine, again adapted to the individual. In addition to "remembering" the user's gender and age in generating educational information, the system also takes the following factors into account: the user's particular migraine symptoms, her dietary and other migraine triggers, identity of her physician, and medication(s) and dosage(s) prescribed during her most recent visit to the doctor. On each occasion, the system also "remembers" what it has previously told the user.

Different users see somewhat different screen displays when they use the migraine system. In response to user and provider input, the system puts up some questions or pieces of explanatory material while suppressing others, adapting individual pieces of text to what it "knows" about the user and the user's medical condition. What enables this tailoring to take place is the user model encoded in the system.
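As a rough sketch of what such tailoring might involve computationally, the fragment below selects and adapts explanatory texts according to a user model and keeps track of what the user has already been told. Again, this is my own illustration; the topics, attribute names, and texts are assumptions, not the migraine project's code.

    # Hypothetical sketch of user-model-driven tailoring with a simple memory
    # of what the system has already told this user. All names are assumed.

    user_model = {
        "sex": "female",
        "age": 42,
        "triggers": ["red wine", "missed meals"],
        "medication": "Inderal",
        "already_told": set(),   # topics covered in earlier sessions
    }

    EXPLANATIONS = {
        "what_is_migraine": "Migraine is a recurring headache disorder.",
        "triggers": "Your reported triggers ({triggers}) are common in migraine.",
        "medication": "{medication} is often prescribed to prevent migraine attacks.",
    }

    def next_explanations(model):
        """Return tailored texts for topics the user has not yet seen."""
        texts = []
        for topic, template in EXPLANATIONS.items():
            if topic in model["already_told"]:
                continue  # the system "remembers" what it has previously told the user
            texts.append(template.format(triggers=", ".join(model["triggers"]),
                                          medication=model["medication"]))
            model["already_told"].add(topic)
        return texts

    for text in next_explanations(user_model):
        print(text)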

Barring technical deficiencies in my own understanding, what I have described so far corresponds to the system-builders' own understanding of what the migraine system explicitly represents about users. Returning now to the theme of this paper, I want to point out that the system also tacitly represents a number of assumptions about its users (as well as assumptions about various other things) (Forsythe 1996a). These other assumptions about users are not part of the formal user model and were never acknowledged in my hearing by the system-builders as part of the system design. Indeed, I doubt that the migraine team members would agree that they have built such additional assumptions into the system. A number of these tacit assumptions are detailed elsewhere (Forsythe 1996a). Here I will describe one feature of the system and point out the beliefs about users tacitly inscribed in this design feature.

As is conventional in the natural language community within computer science, the material produced by the system's Explanation module is explicitly designed to persuade the user to believe certain things. This stance is reflected in the system-builders' description of the system's text-planning architecture, excerpted below:

The explanation planning process begins when a communicative goal (e.g., "make the hearer believe that the diagnosis is migraine," "make the hearer know about the side effects of Inderal") is posted to the text planner. A communicative goal represents the effect(s) that the explanation is intended to have on the patient's knowledge or goals. (Buchanan, et al. 1995, p. 132) (original emphasis)

In order to try to persuade headache patients to believe certain things deemed appropriate by the designers, the system presents information to them selectively. For example, the initial paragraph of a piece of explanatory text provided (in tailored form) to each user defines migraine in the general case by listing the specific symptoms selected by that user while taking the headache questionnaire presented by the History-taker. Thus, if Mrs. Jones experiences flashes, light spots, double vision, blurred vision, photophobia and painful eye movements in connection with her headaches, the patient summary subsequently constructed for her by the Explanation Module will list these as typical symptoms of migraine. Other common symptoms of migraine that Mrs. Jones did not select on the questionnaire are suppressed by the system and will not appear in the general description of the condition shown to her. This design feature is intended to "make [Mrs. Jones] believe that the diagnosis is migraine."
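The logic of this feature can be expressed in a small sketch, again my own hedged reconstruction rather than the project's code: only the symptoms a particular user reported are included in the ostensibly general description of migraine, while other common symptoms are suppressed in service of the stated communicative goal.

    # Hypothetical sketch of the selective tailoring described above.
    # The symptom list is illustrative; the communicative goal echoes the
    # designers' own phrasing quoted earlier in the text.

    COMMON_MIGRAINE_SYMPTOMS = [
        "flashes", "light spots", "double vision", "blurred vision",
        "photophobia", "painful eye movements", "nausea",
        "sensitivity to sound",
    ]

    def migraine_definition(reported_symptoms):
        """Compose a 'general' definition that lists only the symptoms
        this particular user selected on the headache questionnaire."""
        shown = [s for s in COMMON_MIGRAINE_SYMPTOMS if s in reported_symptoms]
        # Symptoms the user did not report are silently omitted, serving the
        # goal "make the hearer believe that the diagnosis is migraine".
        return "Typical symptoms of migraine include " + ", ".join(shown) + "."

    mrs_jones = ["flashes", "light spots", "double vision",
                 "blurred vision", "photophobia", "painful eye movements"]
    print(migraine_definition(mrs_jones))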

This feature of the system has been designed to persuade patients that the physician's diagnosis of their headaches is correct. This aspect of the design reflects some major assumptions. First, and most obviously, it assumes that the user of the system does indeed have migraine. Although no test exists to verify a diagnosis of migraine and although one projected use of the system is in the office of physicians who have not had neurological training, the designers simply took for granted that the physician's diagnosis would be correct. This is a significant assumption: the definition of "migraine" is contested even within neurology (Solomon & Fraccaro 1991) and the condition itself is often misdiagnosed (Sacks 1985; Stang & Osterhaus 1993). In the worst case, some users of the system might have a brain tumor instead of migraine, or (like the first patient to use our system experimentally in a headache clinic) might have both migraine and a brain tumor.

Second, by omitting from the description of "typical migraine symptoms" those symptoms that a particular user does not experience, the system withholds information that might help an incorrectly diagnosed patient to realize that her physician has made a mistake. This covert tailoring of information in a supposedly neutral educational system reflects the apparent assumption that it is more important to persuade patients to believe their physician's diagnosis than it is to present them with information that might serve as a basis for doubting that diagnosis. Presumably influenced by the neurologists who served as "experts" on the project, the designers appear to have taken for granted that the interests of patients using the system are less important than the interests--or maybe the egos--of doctors. While this assumption was never made explicit at project meetings, and may not even have occurred consciously to the system-builders, it is manifest in the architectural logic of the system they created.

As I have shown, the migraine system collects certain information about users and explicitly encodes it through the system's user-modeling capability. In addition, the system embodies certain tacit assumptions about users that are nowhere explicitly encoded. Clearly, there is more information about users represented in the migraine system than is contained in the formal user model.

Discussion

In this paper, I have raised issues on several levels. First, I have tried to unpack the concept of representation as it is understood in two different academic worlds, computer science and anthropology. Among social scientists, notions such as simulation, modeling, and imaging evoke representational modes that may be quite broad and informal. To software designers, in contrast, "representation" implies an act that is both formal and specific: the explicit encoding of information in a programming language.

Second, drawing upon long-term participant observation on system-building practice in medical informatics, I have tried to demonstrate some practical implications of this definitional distinction with respect to the question of when software can be said to embody representations of its intended audience(s). From a computer science standpoint, such representations are embedded in only a small minority of systems, those containing a formal user model. In contrast, adopting a broader anthropological notion of representation, I would argue that many more and perhaps all such systems embody images of the user. Whether or not designers see or intend this, their daily design decisions routinely reflect tacit as well as explicit assumptions about the identity and characteristics of future users.

Third, as I have noted, the existence of this second, possibly unintentional form of representation in system design raises issues of epistemology and agency. It also raises ethical questions about how people's interests may be affected by their use of medical and other software. Shouldn't users know what the designers of a given piece of software have taken for granted about them? For example, shouldn't they be told that a system ostensibly intended to empower them includes features that may actually do the opposite? I believe that users should be made aware of the assumptions embedded in the systems they use. However, the tacit nature of many of these assumptions raises a practical problem--if designers are not conscious of their own cultural and disciplinary assumptions, how can they be expected to bring them to the attention of users?

And finally, I have offered a reflexive comment upon epistemological practice in science and technology studies. Within STS, we often take a common sense approach to such notions as simulation, modeling, and representation. Common sense implies that in any given setting, what should count as representation will be obvious. However, as I have tried to show, this is not necessarily the case. While both my informants and I believe that we know representation when we see it, the underlying notions that inform this belief are different. These differences in meaning are not idiosyncratic: my informants' shared notion of representation is normal in computer science and medical informatics; that which I take for granted seems to be shared by other anthropologists of science and technology. Thus, we have here a disciplinary difference in "common sense" understandings that I--following Geertz--would call cultural (Geertz 1973; Geertz 1983). Such differences in underlying meanings add complexity to apparently straightforward questions. To fail to pay attention to this complexity is to miss important dimensions of both our informants' practice and our own.

References

Buchanan, Bruce G., Johanna D. Moore, Diana E. Forsythe, Giuseppe Carenini, Stellan Ohlsson, and Gordon Banks 1995 An Intelligent Interactive System for Delivering Individualized Information to Patients. Artificial Intelligence in Medicine 7(2):117-154.

Forsythe, Diana E. 1992 Blaming the User in Medical Informatics: The Cultural Nature of Scientific Practice. Knowledge and Society 9:95-111.

Forsythe, Diana E. 1993a The Construction of Work in Artificial Intelligence. Science, Technology and Human Values 18(4):460-479 (Errata for this paper appeared in this journal, 19(1), 1994:120).

Forsythe, Diana E. 1993b Engineering Knowledge: The Construction of Knowledge in Artificial Intelligence. Social Studies of Science 23(3):445-477.

Forsythe, Diana E. 1994 STS (Re)constructs Anthropology. Social Studies of Science 24(1):113-123.

Forsythe, Diana E. 1995a Ethics and Politics of Studying Up. Presented to American Anthropological Association, Washington, D.C., November.

Forsythe, Diana E. 1995b Using Ethnography in the Design of an Explanation System. Expert Systems With Applications 8(4):403-417.

Forsythe, Diana E. 1996a New Bottles, Old Wine: Hidden Cultural Assumptions in a Computerized Explanation System for Migraine Sufferers. Medical Anthropology Quarterly 10(4):551-574.

Forsythe, Diana E. 1996b Studying Those Who Study Us: Medical Informatics Appropriates Ethnography. Presented to American Anthropological Association, San Francisco, November.

Geertz, Clifford 1973 The Interpretation of Cultures. New York: Basic Books.

Geertz, Clifford 1983 Local Knowledge: Further Essays in Interpretive Anthropology. New York: Basic Books.

Gusterson, H. 1992 The Rituals of Science: Comments on Abir-Am. Social Epistemology 6(4):373-379.

Hess, David, and Linda Layne, eds. 1992 The Anthropology of Science and Technology. Greenwich, CT: JAI Press, Inc.

Lynch, M., and S. Woolgar 1990 Introduction: Sociological Orientation to Representational Practice in Science. In Representation in Scientific Practice. M. Lynch and S. Woolgar, eds. Pp. 1-18. Cambridge, MA: MIT Press.

Moore, Johanna D. 1994 Participating in Explanatory Dialogues: Interpreting and Responding to Questions in Context. Cambridge, MA: MIT Press.

Nyce, James M., and Gail Bader 1993 Fri att välja? Hierarki, individualism och hypermedia vid två amerikanska gymnasier (Hierarchy, Individualism and Hypermedia in Two American High Schools). In Brus över Landet. Om Informationsöverflödet, kunskapen och Människan. L. Ingelstam and L. Sturesson, eds. Pp. 247-259. Stockholm: Carlsson.

Nyce, James M., and Jonas Lowgren 1995 Towards Foundational Analysis in Human Computer Interaction. In Social and Interactional Dimensions of Human-Computer Interfaces. P. J. Thomas, ed. Pp. 37-47. Cambridge: Cambridge University Press.

Paget, Marianne A. 1988 The Unity of Mistakes: A Phenomenological Interpretation of Medical Work. Philadelphia, PA: Temple University Press.

Paget, Marianne A. 1993 A Complex Sorrow: Reflections on Cancer and an Abbreviated Life. Philadelphia, PA: Temple University Press.

Sachs, Patricia in press Thinking Through Technology: The Relationship of Technology and Knowledge at Work. Information, Technology and People. Portland, Oregon: Northwind Press.

Sacks, O. 1985 Migraine: Understanding a Common Disorder. Berkeley: University of California Press.

Solomon, S., and S. Fraccaro 1991 The Headache Book. Effective Treatments to Prevent Headaches and Relieve Pain. Yonkers, NY: Consumer Reports Books.

Stang, P.E., and J.T. Osterhaus 1993 Impact of Migraine in the United States: Data from the National Health Interview Survey. Headache 33(1):29-35.

Suchman, Lucy A. 1987 Plans and Situated Actions. Cambridge, England: Cambridge University Press.

Suchman, Lucy A. 1992 Technologies of Accountability: On Lizards and Airplanes. In Technology in Working Order: Studies of Work, Interaction and Technology. G. Button, ed. Pp. 113-126. London: Routledge.

Traweek, Sharon 1988 Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge, MA: Harvard University Press.

Traweek, Sharon 1992 Border Crossings: Narrative Strategies in Science Studies and among Physicists in Tsukuba Science City, Japan. In Science as Practice and Culture. A. Pickering, ed. Pp. 429-465. Chicago: University of Chicago Press.

Traweek, Sharon 1993 An Introduction to Cultural and Social Studies of Sciences and Technologies. Culture, Medicine and Psychiatry 17:3-25.

Wexler, Alice 1995 Mapping Fate: A Memoir of Family, Risk, and Genetic Research. Berkeley: University of California Press.

Woolgar, S., ed. 1988 Knowledge and Reflexivity. London: Sage Publications.

Acknowledgement

An earlier version of this paper was presented to the Workshop on Simulating Knowledge: Cultural Analysis of Computer Modeling in the Life Sciences, Cornell University, 19-21 April 1996. I thank the organizer, Stefan Helmreich, and the conference participants. For helpful comments on this paper, I am grateful to Susan Kelly and Sigrid Mueller.

Notes

[1] In contrast to practitioners of less relativist disciplines, anthropologists tend not to see any categories as stable and universal.

[2] People from a variety of STS disciplines are clearly aware of the potential utility of ethnographic techniques from anthropology; note the recent discussion over the appropriation of ethnography by STS practitioners (Forsythe 1994; Gusterson 1992). When such people attempt to borrow ethnographic techniques, however, they tend not to borrow the epistemological stance that anthropologists see as intrinsic to good ethnographic practice and that requires careful attention to the problem of categories. Compare comments by Nyce and Lowgren about a similar pattern in the HCI world (Nyce & Lowgren 1995) and by Forsythe about the pattern in medical informatics (Forsythe 1996b).

[3] As Geertz suggests, and as I have discussed elsewhere (Forsythe 1993a; Forsythe 1993b), shared assumptions within disciplines ("intellectual villages" (Geertz 1983, p. 157)) can operate in a way that anthropologists describe as cultural.

[4] Some STS researchers have played with the notion of reflexivity but do not seem to have incorporated it as a serious analytical tool (e.g., Woolgar 1988). In contrast, reflexivity has been used very effectively in their work by some other investigators of science, technology, and medicine (e.g., Paget 1988; Paget 1993; Traweek 1988; Traweek 1992; Wexler 1995).

[5] Some material in this section is taken from (Forsythe 1996a).