From: sethf@MIT.EDU
Date: Fri, 23 Dec 1994 15:28:28 -0500
To: gjackson@MIT.EDU
Subject: More comments about your stopit article

Since you were called away the other day in the middle of our conversation about your article, I thought I'd write up and clarify a few of my thoughts about it. My message turned out to be quite long, but I hope you find it interesting. Please take these comments as constructive criticism. I deeply value the freedom that exists at MIT, but rather than making me complacent, it inspires me to use that freedom to be vigilant about infringements and to work toward improving the situation where it falls short of the ideal.

> Crime & Punishment, or the Golden Rule?
  ^^

A very laudable formulation. Would that it were true! Unfortunately, the article does not describe that. It would be more accurately entitled: "Every Case a Jury Trial, or Plea Bargaining?"

> Is there a computer cluster somewhere where someone can be
> safe from pornography and harassment? I'm sick of this.
  ^^^^^^^^^^^^^^^^^^^^^

In the very first sentence, the phrasing is loaded. What's said here is more akin to "Is there a place where I can be guaranteed not to see anything I find offensive?"

> male student. His screen was displaying a graphic image of a sexual
> act. Judy asked the student to remove the image, since it was
> interfering with her ability to work comfortably. He refused - loudly
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Was the image on her screen? If so, it was truly interfering with her ability to work. No, what was happening was in fact "she didn't like what he was viewing".

> He refused - loudly and contentiously. After a shouting match, Judy

Note that "loudly and contentiously" is attached to the description of the refusal, but the request is described in neutral terms: "Judy asked the student ...". How about a case where "Judy accused the student of offending and harassing her.
He told her he hadn't spoken to her or looked at her, and so did not think her request justified".

> On its face this case involves a conflict between rights: Judy has a
> right to use facilities without interference, the male student has a
> right to display whatever images he wishes, and both cannot exercise
> their rights without conflict.

There is a standard Free Speech answer to this: "There is no right not to be offended". It's not a conflict of rights at all. She is not experiencing "interference". She is experiencing "offense". That is a whole different category.

> personal computers in a public cluster are an oxymoron - the user
> doesn't know whether to treat them as shared or private. When
> different users answer the question differently, conflict ensues.

No, the conflict arises because there is a broad and wide-ranging effort to restrict what can be read or viewed or written. It embraces both public and private contexts, but because there are indeed more justifiable restrictions on conduct in public, the argument in this case becomes one of extending the scope of regulations. People subject to these attacks sometimes dispute them by arguing that the context should be socially regarded as private. For example, consider a personal conversation, which is socially regarded as "private" even though it may take place in "public". But "How to treat computer clusters" is not the root question. It's more an instance of "How far can someone's feelings about what you are publicly reading, viewing, or writing restrict that?". This is very obvious if you think about, say, reading _Playboy_ at a restaurant or at work. These are real cases: the former is discussed by Nat Hentoff in his book _Free Speech For Me But Not For Thee_, and the latter was recently litigated in a lawsuit in California (the decision was that the First Amendment protects "quiet possession, reading and consensual sharing" of such material, but not "physical or verbal flaunting").
> One approach to problems of this sort, typical of college and
> university disciplinary mechanisms, involves judgments and sanctions.
> The judgment-and-sanctions approach takes a long time, entails
> standards of evidence, and requires answers to difficult questions.

Right. I call this "the courts".

> Most people I have told about Judy Hamilton say that she should win,
> that her rights outweigh the male student's because the incident
> involved public facilities and her goal was learning whereas his was
> titillation.

If you've told mostly administrators and bureaucrats (i.e., people who deal with implementing control and authority), or people who in general would hardly want to be seen as defending "porn", this result is not surprising. Try asking some civil-libertarians, Free-Speech types, Libertarians, and so on. But rights are, gratefully, not determined by majority vote (in fact, they are meant as a barrier to what "most people" feel). Here's one person who thinks she should lose, because it is not a conflict of rights at all (again, there is no right not to be offended). And who says "her goal is learning whereas his was titillation"? What loaded language! How about "her goal was grinding her political ax, whereas he just wanted to be left in peace"?

> But what, for example, if the display had been on a more
> personal computer, say a laptop? What if the context had been a
> cafeteria, rather than an academic facility? What if the image had

Exactly. When can people tell you "You can't read that"?

> been a swastika, rather than a sexual image? What if the screen had
> displayed an anti-Semitic quotation from the published writings of
> Harvard's President Lowell (once an MIT faculty member) in large type?
> In small type? What if it had been a Rubens painting? What if it had
> been a fraternity's name?

And to whom can they do it, and what can they do?

> What if there were other workstations available,
> from which the male student's display could not be seen?
And can they do it just when they are personally concerned, or in the name of society in general?

> A judgment-and-sanctions approach revisits the problem for each
> variation. It creates a stream of acrimonious, difficult-to-decide
> cases, consigning policy evolution, social learning, and ethical
> progress to the indefinite future.

Yes. "Hard cases make hard law". These *are* difficult questions.

> And it leaves administrators without an efficient, commonsensical
> response to complaints like Judy's.

The commonsense reply is "I'm not a lawyer, and not authorized to make legal decisions". Unless they are authorized to do so, in which case that's why their job is tough.

> A different approach to problems of this sort frames the issues not in
> terms of competing rights, but rather in terms of balancing interests.

Now we get to the meat of the approach. It's established that court cases are messy and difficult to deal with (I completely agree). The phrase "balancing interests" sounds like "competing rights", so it should be soothing, but it becomes apparent later that it is in fact the ancient argument of "You have a right as long as you don't ``abuse'' it by actually exercising it".

> Judy and the male student both have an interest in continuing
> availability of public facilities, whose existence depends on a social
> contract to respect each other as citizens sharing limited common
> resources. Each has a right to do things that might offend others.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

So far, so good, but soon comes the kicker ...

> Yet it is in each of their interests not to exercise that right at all
> times, lest their competition jeopardize their common interests by
> causing administrators to restrict or eliminate public facilities.

Bingo. "Exercise your rights too much, they might go away. So don't push it".
In Free-Speech theory, this is known as "chilling effect" (not to be confused with "chilly climate", a term of "hostile environment" harassment ideology).

> From an interests perspective this problem requires an approach that
> helps each citizen to consider the interests of others.
> A judgment-and-sanctions approach might serve this end, especially if it
> involved the equivalent of public hangings. But we believe there are
> more efficient approaches focused on averting and changing
> inappropriate behavior rather than punishing it. I will return to this
> point, but only after another case.

This article DOES describe a judgment-and-sanctions approach. You clearly do need to judge which complaints are valid, and do so in practice. Then later on, a series of increasing sanctions is detailed. At the first level, someone gets a cease-and-desist order. After that, they get a lecture. If these two don't work, they get to go to "court". This is a judgment-and-sanctions system. But it's a *SOPHISTICATED* judgment-and-sanctions system. Every case is not heard in court; rather, the defendant gets an opportunity to "plea bargain" (really more analogous to pleading "no contest"). If they stop when they receive the first level of sanction, the others are not employed. This is, I think you argue, a much better idea (more "efficient") than having a full trial for every incident - that's why the whole thing is "plea bargaining" overall. The defendant avoids more serious penalties, the prosecution avoids having to actually prove its case. But it focuses on "averting and changing inappropriate behavior" through punishment. All you are saying is that it's a very good idea to start off with threats and light punishments.

The Golden Rule, "Do unto others as you would have them do unto you", gives very different results. My view of applying it to this case would be "I don't complain about what's on your screen, you don't complain about what's on mine".
Obviously, you have something very different in mind. It can't be "I'll remove anything you find offensive from my screen if you remove anything I find offensive from yours" (I find anti-freedom writings very offensive ...); that's unworkable. So what is it? "We should all be considerate"? That doesn't help; people clearly have different ideas of what qualifies. Note I think the interaction in this case in fact followed the inverse Golden Rule ("Do unto others as you have been done unto"). She thinks he's bothering her, so she bothers him, and he then bothers her back (the shouting match). The result is lamentable, but the reciprocal action is much more a matter of two people treating each other with equal (which does not mean positive ...) consideration than anything else described in this article.

[Zareny case]

> solidly against censorship, but if similar remarks about the
> immorality of Jews or Blacks were made, they would probably be illegal.

In this country, at least, he's dead wrong. There is no group libel or hate-speech law (yet).

> also think that if the hate laws covered gender orientation, he
> would be in violation of the law.

Again, he's just wrong about US law (see the R.A.V. vs. St. Paul decision, for example, /mit/safe/legal/rav-v-st-paul). If he's talking about his own country's law, I don't think a US citizen can be held accountable for that (lots of stuff on Usenet is illegal in Saudi Arabia, Iran, or Singapore, to name some notable examples).

> An easy one, it seemed: find Mr Zareny (again, not his real name), let
> him know that continuing to post offensive messages after being asked
> to desist might constitute harassment, and warn him that legal action
> could result.

An easy one, indeed. Next to nothing said about general groups is illegal in the US, even if people ask you to stop.
The personal stuff MIGHT be libel, but it would be hard to win, and it WOULD NOT constitute *legal* harassment at all (unless by some stretch you have to read that netnews group for your job). If MIT tried to apply its own harassment policy, it would be putting itself in the position of punishing public publication, something I severely doubt would ever hold up when challenged.

> harassment policy. MIT's policy reads as follows:
> Harassment is any conduct, verbal or physical, on or off campus,
> which has the intent or effect of unreasonably interfering with an
> individual's or group's educational or work performance at MIT or
> which creates an intimidating, hostile or offensive educational,
> work or living environment.

To paraphrase the saying: "I do not think those words mean what you think they mean". MIT is on record as arguing in court that what Prof. Bitran did was not harassment, and letters which were leaked indicate that what happened in the Tewhey-Nolan-Shea affair was not ruled as harassment. The protesters who disrupted Bitran's class weren't even prosecuted (though they did get warned). Given those precedents, I think the above needs to be construed VERY narrowly to be consistent (which doesn't stop some people from trying to make it broad but selectively enforced). [See /mit/safe/cases/erulkar-bitran, and /mit/safe/cases/tewhey, particularly the file official-letters.]

> In practice we apply three tests to determine whether given behavior
> constitutes harassment. We post these questions for all to see on
> large posters in all computer clusters:
> Is it harassment? Ask yourself these three questions:
> * Did the incident cause stress that affected your ability, or the
>   ability of others, to work or study?
> * Was it unwelcome behavior?
> * Would a reasonable person of your gender/race/religion subjected
>   to this behavior find it unacceptable?

These questions do next to nothing to filter out the frivolous or trivial complaints.
Anyone can say "It caused me horrible stress, it was unwelcome" (isn't the second practically subsumed in the first one?), and who, making a complaint, is going to say that they are an unreasonable person of their group? I would suggest replacing the questions with something along the lines of:

"Was there behavior directed at you?"
"Was it so SEVERE OR PERVASIVE that it created an OBJECTIVELY hostile, or offensive, atmosphere for work or study?"
"Would a reasonable person agree with you?"

This isn't as good as I would like in the best of all possible worlds - it's not my dream phrasing; it's mostly from looking at the latest Supreme Court decision on harassment, /mit/safe/legal/harris-v-forklift-sys. The key points there are that they stress a change from SUBJECTIVE to OBJECTIVE (what this means is something of a puzzler, though it gets away from the horrible consequence of subjectivity, that the accuser gets to decide what is harassment), and - direct quote - state that the standard is not "any conduct that is merely offensive".

> how closely one defines the reference community. But the alumnus did
> not contend that his ability to work was being affected.

But if he were slicker, he could have, very easily. "When I read his posts, I am just so upset that I cannot work at all. And knowing he's saying these things, even if I don't read them, makes me a nervous wreck". He just doesn't know how to put a complaint together in the right way.

> As Nero Wolfe might say, this is unsatisfactory: Mr. Zareny's behavior
> clearly is unproductive. It falls short of the standard we would like
  ^^^^^^^^^^^^^^^^^^^^^^^

To you. He might have a very different view.

> MIT students to reach. So we should communicate with him, and let him
> know that flaming postings are unlikely to win him friends. A

Umm, long-time flamers tend to know this. Especially on netnews.

> As we pursued this point the case grew interesting: there was no "M

Deus ex machina.
The merits of the case didn't need to be argued; it was settled on procedural matters. *Shrug*.

> The Zareny case, apparently about harassment, thus turns out to be as
> much about venue and responsibility. The Internet's accessibility,
> lack of authentication standards, and context-bound rules of discourse
> produce a jurisdictional and evidentiary morass: an alumnus in

Solid points. Electronic situations are indeed complex.

> or process whatsoever, since it was unauthorized - but should we do more?

No. It's hardly the responsibility of MIT to police Usenet.

> The cases have some common threads: traditional judicial approaches
> are either infeasible or inefficient, framing the issue in terms of
> rights leads to conflict, jurisdiction can be messy, and therefore the
> central need is for members of electronic communities to appreciate
> their common interests in rules for behavior and use.

The propositions of this paragraph are in fact disconnected. The "therefore" does not follow.

"traditional judicial approaches are either infeasible or inefficient" "jurisdiction can be messy"

No-one ever claimed the judiciary was efficient. I don't know about "infeasible" - networks have intensified jurisdictional problems, but they did not create them.

"framing the issue in terms of rights leads to conflict,"

Of course. Because there IS conflict.

"AND THEREFORE the central need is for members of electronic communities to appreciate their common interests in rules for behavior and use."

Everyone has an interest in "rules". This isn't a "therefore". But WHICH rules? That's a POLITICAL question. Maybe the whole issue is in the word "common". It's a common interest, perhaps, but that hides the fact that the interests are NOT common (by which I mean that people sure don't AGREE on which rules to have).

> Within the academic-computing and Athena side of Information Systems
> we first approached computer and network misbehavior idiosyncratically.
"We gave out arbitrary punishments" > As academic computing become more central to education at MIT, "This became unworkable" > and unmanageable. Moreover, they often triggered acrimonious exchanges > with perpetrators rather than productive behavior changes. And we "We didn't like arguing with the people" > became increasingly uncomfortable with our confounded roles as > rulemakers, detectives, prosecutors, judges, and corrections officers. "And it wasn't our job anyway" > We discussed this problem among ourselves, and with individuals from > the offices of the Provost, the Dean for Student Affairs, and the > Ombudsman. Out of these discussions grew a recognition that averting > and stopping antisocial and unethical behavior was sometimes more > important than punishing offenders. And out of this recognition grew "So we decided we'd really rather have people just stop making trouble for us, rather than suffering" > simple set of mechanisms designed to stop harassment and improper use > quickly, while keeping options for more traditional sanctions open. "So we came up with a wonderful idea: We'd just threaten them first, and if that worked, great. We could always haul them into court later if the threat didn't work." > The stopit mechanisms, as they came to be known, were based on a > simple proposition: > Most offenders, given the opportunity to stop uncivil behavior > without having to admit guilt, will do so. "Most people, if threatened, but given an out, will take it." > The stopit mechanisms thus were designed to do two things: to discover > computer misbehavior rapidly, and to communicate effectively with its > perpetrators. The overarching goal is just what the name suggests: to > stop it. "Rather than arguing about rights and trying to settle these difficult issues, we just get the cases off our backs." > As stopit precedents have accumulated, so have standard responses to > typical offenses. 
> Moreover, field staff have become better attuned to
> standard responses, and often are able to handle complaints completely
> on the spot. This was very difficult before stopit gathered enough
> data to develop, test, and implement response policies. Standard
> responses and field-staff skills gradually have reduced the senior
> administrative overhead associated with stopit.

"We don't need judges for every case anymore. We can delegate to assistant prosecutors".

> The third stopit mechanism is a carefully-structured standard note to
> alleged perpetrators of harassment, improper use, or other uncivil behavior.
  ^^^^^^^^^^                ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^^^

A commonly-seen construction: lump together various things. What is "uncivil behavior"? Refusing to do something someone asks you to do? Refusing impolitely?

> concludes with a short sentence: "If you were aware that your account
> was being used to [whatever it was], then please make sure that this
> does not happen again."

"Please make sure that this does not happen again"??? That sounds a lot like a (polite) order to me. It's nicely phrased, it's pleasant, but it's unmistakably an order as opposed to a request. Now, you clearly don't send this note out for every complaint of this type (or do you? - in which case it would be more absurd). So hidden here is one of the most obnoxious aspects of over-broad policies - "it matters only that a complaint be brought to have action taken against the accused". But this isn't the whole of it; automatic action is subject to the politics of the complaint-handler. No evidence, no testimony, no "other side of the story" (that's messy, inefficient). An accusation is made, the handler approves of it, the order goes out. There needs to be some mechanism for getting rid of "fishing-expedition" complaints. As the system is set up now, there is no disincentive at all to filing a frivolous or ideological complaint.
It can be done anonymously, and there isn't anything that needs to be proved. If the accused doesn't comply, the accuser is simply back where they started. This fosters a "What is there to lose?" situation.

> Two interesting outcomes ensue. First, many recipients of u.y.a. notes

"People try to save face"

> Second, and most important, u.y.a. recipients virtually never repeat the
> offending behavior.

"And they usually obey the order". This is true of a lot of cease-and-desist orders in the real world also. Because everyone knows it's basically "Do this, you can walk; fight us, we'll grind you up."

> This is important: even though recipients concede no guilt, and
> receive no punishment, they stop. If we had to choose one lesson from
> our experience with misbehavior on the MIT network, it is how
> effective and efficient u.y.a. letters are.

"Threats work".

> They have drastically
> reduced the number of confrontational debates between us and
> perpetrators, while at the same time reducing the recurrence of
> misbehavior.

"We don't have to argue with people."

> When we accuse perpetrators directly, they often assert
> that their misbehavior was within their rights (which may well be
> true).

"If we accuse them to their face, between fight-or-flight, they often fight"

> They then repeat the misbehavior to make their point and
> challenge our authority. When we let them save face by pretending (if
> only to themselves) that they did not do what they did, they tend to

"But if we give them an impersonal out, they take it"

> become more responsible citizens with their pride intact. We lose the
> satisfaction of seeing perpetrators punished, but we reduce
> misbehavior and gain educational effectiveness.

"We don't make them suffer, but we get rid of the hassles"

What is this "responsible citizens" and "educational effectiveness"?
I don't see where anyone's mind has been changed; rather, as in the case of plea-bargaining, cases have been disposed of in a much more streamlined manner.

> reconfiguring machines or using restricted facilities are involved
> (there have been virtually no harassment recidivists), perpetrators
> perpetrate again.

"On occasion, the threats don't work"

> Or they respond to the u.y.a. letter by contesting
> the policy in question.

"They aren't intimidated. They assert rights."

> In these cases the fourth stopit mechanism
> comes into play: the individual is invited to discuss the matter with
> a senior Information Systems administrator.

"We offer to give them a lecture"

> If the individual declines
> this invitation, it becomes more forceful: in some cases the user's
> account is temporarily frozen until he or she appears (but this only
> happens with a Director's approval).

"An offer they can't refuse"

> In extreme cases, or if discussion fails to deter future misbehavior,
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> the fifth stopit mechanism comes into play: the Institute's regular
> disciplinary procedures.

"If THAT doesn't work, THEN we send them to court"

> In contrast to our earlier practice, MIT
> Information Systems neither takes private action nor imposes internal
> punishments (such as denying accounts, or having offenders clean
> screens) outside of regular procedures. Instead, Information Systems
> files complaints on behalf of itself or of victims (with their
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> consent), and then lets the MIT Committee on Discipline (or whatever
> organization is responsible) judge the case and impose penalties.

"But we stopped being the court; we send them to it. They didn't plea-bargain with us, so we bring a tough case against them. They didn't take the out, so it's time for the trial.
But it's not our problem now, and we don't have to deal with the mess of making a decision"

> CRIME & PUNISHMENT, OR THE GOLDEN RULE?
> Our answer is simple: the Golden Rule.

NONSENSE. It's "Crime & Punishment". But it's the modern form of crime and punishment - plea-bargaining, "probation", and the like.

> Attempting to reduce uncivil
> behavior on the academic network by defining "crimes" and punishing
> "criminals" solves only part of the problem, at the same time
> prompting enough debate, backtalk, and defiance of authority to wipe
> out any gains.

"Don't use just the iron fist"

> Attempting to reduce uncivil behavior by promoting
> respect for others sharing resources, and especially by permitting
> community members to change their behavior without admitting guilt,
> seems to achieve our central goal: maximizing educational efficiency
> by reducing the social and ethical costs of intensive academic
> networking.

"Put it in the velvet glove. That works far better."

> But attaining our goal requires one further step, which we have yet to
> take effectively at MIT. Rather than educate students about civil use
> of shared academic-computing facilities only when they misbehave, we
> must find ways to educate students at the outset.

"But now we only threaten them when there's a complaint. We need to threaten them at the start"

> Currently we provide
> materials on proper use when students open accounts, and our
> introductory training sessions emphasize the theme, but neither of
> these traditional approaches seems to have much effect.

Right. BECAUSE "proper use" is NOT obvious. Different people can have legitimately different ideas about what constitutes it. It is not at all obvious what is "improper use" to read (or that anything could be), or that someone is justified in coming after you for your public writings (even if those are rude and offensive). Should someone live in fear that they might possibly, somewhere, somehow, offend someone?
I don't think that would be a good mindset to try to induce.

> real cases works very well. Fundamentally, what we need is two things:
> for students to understand their basic social and ethical obligations
> as members of a community, and for them to understand the implications
> of these obligations when they use computers and networks.

Yes. But again - THIS CAN DIFFER - legitimately! You explicitly try to sidestep these basic differences ("rights") in terms of not bothering the system too much ("interests"). But this simply avoids the fact that some concepts are not very amenable to compromise. The argument over what you can read, view, or write is an old one, so it should not surprise us that it applies in such powerful communications mediums as computers and networks. The different concepts about if and how to regulate expression (blasphemy, erotica, subversion, etc ...) don't disappear when generalities are invoked. You describe a particular model, but it is very much a judgment-and-sanctions, crime-and-punishment, (minimal)rights-based model; it just includes a quite sophisticated judicial system and factors in the cost of prevailing in a dispute (two features that are very often omitted from theorizing).

> Promoting civility on the academic network requires moving our goal
> beyond adjudication to behavioral change and our tactics beyond
> accusation to redirection. Having achieved these two transitions, we
> need to move from remedial to preventive strategies if we are to
> realize the full potential of networked academic communities.

I don't understand this paragraph at all.

It struck me, in these concepts of citizenship, ethics, and social contract, that I could see no room for the concept of "He who serves the State best, opposes the State most" (attrib. Thoreau). As a specific example, consider the recent incident of the Freshman Picture book cover, suppressed by the MIT President himself for alleged offensiveness.
The MIT Student Association for Freedom of Expression used this picture in a political protest poster, and made a 20 x 30 blowup of the poster for use at the Academic Midway. The poster and blowup were made in a public cluster, and the work attracted some stares (especially the large blowup, which needed to be assembled from individual printout segments). It surely could have offended someone (after all, that was the very reason the image was suppressed!). And considering the hectic nature of that day, if someone were to ask that the poster not be made, they could very well have been refused rather impolitely. However, I regard that use of public facilities as an excellent example of undertaking "basic social and ethical obligations as members of a community", and exactly what an academic institution should encourage. Yet it would certainly be considered rude and uncivil by some. But to appease these people, in effect one would have to severely cripple the ability to engage in political dissent. This hardly seems any sort of citizenship to me.

If someone had taken it into their mind to play the line of "seeing-that-picture-upsets-me-so-much-I-can't-work", and registered a complaint with stopit, would a warning letter have been sent to the poster-makers? Or even an agent dispatched to "inform" (i.e. threaten) them about harassment charges? Note it can be said to pass all three "questions": the accuser can easily say they were deeply upset by seeing the poster and that it was unwelcome, and the image in it was ordered suppressed by the MIT President, so I suppose it would be said to be "reasonable" offense by definition. Suppose someone uses the poster as their background image? Does that warrant a notice? If so, you've just created a very strange situation, where someone can display a political poster in a hallway, but not on their own screen (or maybe it's banned in both places, in which case the squelching of dissent is clear).
Note this path is in effect re-creating the "redeeming value" test of obscenity law. But the formulation of "offensive ... environment" contains no such broad protections, and the identification of good behavior with being inoffensive similarly has no room for such politics.

Well, this turned out to be very long. I'm deeply concerned about this issue, and it shows. I hope you find the criticism cogent enough to induce you to revisit some of your ideas. I'd much rather see MIT be at the forefront of defending electronic freedoms than an innovator in mechanisms to finesse the issues.

================
Seth Finkelstein
sethf@mit.edu