Introduction for CIS 492, Spring 2002
M. van Swaay

Build the scaffold, construct the lattice, weave the net. That is the assignment for the first lecture, or maybe the first pair of lectures. At the end, the structure will be full of holes, which have to be filled in the remaining sessions. But the structure has to be sound, and it has to be complete, before the hole-filling can begin. It is quite a challenge; I hope to rise to it.

Ethics covers not obedience to some set of rules, but consistency and 'moral sense' in the formation of choices and decisions. Choice implies freedom to choose. But that 'freedom' is not unbounded: some implicit, and often unstated, set of rules is generally acknowledged. Fletcher Moulton called this 'Obedience to the Unenforceable'.

Why did you do X? Because I felt like it. Because my boss told me to. Because the law says so. Because I concluded it was the right thing to do. Why did you conclude that? Because I can argue for X from Y. Why then Y? Persistent questioning of 'why' must eventually lead to an inescapable answer of 'because'. At that level further explanation is both recognized as unnecessary and admitted as unavailable. In mathematics that level is the place where the axioms are found.

Axioms, by definition, are beyond challenge. But are they really? In math we know about Euclidean geometry, but also about non-Euclidean geometry. We have no trouble with that, because math is a domain that 'does not include us': we look at it 'from the outside'. Axioms in the domain of life are not so clear-cut and concrete. In math the axioms can be seen as starting premises 'agreed on by mathematicians', on which a structure can then be built. The notion of 'agreement' does not have to exist in the math domain; we rely on it in the human domain to reach agreement on the math axioms. But we cannot define the 'human axioms' by negotiation or agreement: that would imply that both negotiation and agreement are already available.
We would then have to classify those as axioms, and they must have come into existence without negotiation or agreement. So the 'human axioms' have to pre-exist, before anybody started to think or talk about them. No logical argument can return an answer to the question where they came from. If the human axioms cannot result from negotiation, are they then non-negotiable? That is a profoundly disturbing question. If they are truly non-negotiable, i.e. dogmatic, then we are stuck with them, and we have no argument, or even reason for hope, that the dogma will be shared by others.

Observation gives us reason to see the world in a far less bleak light. We find that the vast majority of humankind is inclined to sympathy and cooperation. That is precisely the reason why deviations make the headlines. We can speculate and fantasize about the origin of that good luck that appears to be built into humanity. Matt Ridley does exactly that in his book 'The Origins of Virtue'. James Q. Wilson approaches the same question from a different angle in his book 'The Moral Sense'. Kant tries to approach the question philosophically, and arrives at his categorical imperative. But Kant bases his argument on a premise of reason, without answering why 'reason' has to be admitted as a premise. Various religions oblige their followers by declaring the answer as 'god-given', and therefore outside the responsibility of the individual.

I propose to dispose of the question where the axioms come from by placing that question in the spiritual domain, and then declaring that domain to be outside the domain of this course. Lest you see this as a cop-out: I do have a personal belief that lets me be at peace with the question, but I neither can nor want to try to make others share that belief. You have to think that through for yourself. In this context a short comment may be in order about the nature of faith and religion.
I propose to make a distinction between the two terms: in my view faith answers what cannot be reasoned. Faith then has to be personal. Religion is an expressed 'common denominator' for the personal faiths of a group of people, possibly a very large group. The advice that you 'think it through for yourself' reveals that 'axiom' is not the best word to use ... but I have no better. To the outside world one's personal faith is normally considered beyond negotiation. But not necessarily to its 'owner'. I doubt that anyone would challenge the observation that each of us 'matures in faith', and, by implication, challenges his own faith as long as he lives. I submit that a faith that cannot be reconciled with 'being human' cannot survive internal challenge. Of course that raises the question what 'being human' means ....

In contrast to faith, religion will have to rest on some expressed dogma, even if that dogma may itself declare tolerance and respect for those who do not subscribe to it. But the idea of religion as a belief that does not allow challenge, even though it can admit other beliefs, makes religion a very tempting cloak in which to wrap fanaticism, in the hope that the refusal to admit challenge will protect the fanaticism.

Having admitted to the existence, even necessity, of axioms, can we find reason to hope that they may extend beyond a single person, possibly to all of humanity? Again, that question is not open to proof, but we can make some observations. One of those I made earlier: empirically we find that uncooperative behavior is almost universally seen as abnormal. More reassuringly, theologians tell me that the 'golden rule' is common to all major religions, and to at least one of the major philosophies. Without attempt at proof I submit that the golden rule may well be sufficient, and that we can persuasively argue that it is universal.
That does not mean that every person in the world will abide by it, but it does mean that the few who do not will be seen as not only abnormal but also without claim to approval. After all, violation of the golden rule implies asymmetry: 'I am allowed to impose on you, but not the other way around.' The suggestion that the golden rule may serve as the core of a universal ethical framework rests in part on an observation: 'there appear to be no arguments against it', and of course in part on belief: 'it should serve'.

Just as geometry is vastly more than Euclid's axioms, human life is vastly more than the golden rule. So we need to build some structure beyond the bare axiom(s). We can call those structures 'models of ethics'. Some history is in order here, to encourage some modesty, and to avoid much frustration. People have chewed on ethics questions for as long as history has been recorded. Some say, maybe a bit facetiously, that Plato said it all, and everything after that is just footnotes to his work. The mere fact that today we recognize at least three major models of ethics is evidence that we don't really 'know' yet. That, in turn, leads to the recognition that 'the model of ethics' may not be knowable. But that should not stop us from thinking about it. Ethics models: duty-based, rights-based, consequence-based, relativistic.

The U.S. attaches much weight to one more anchor, and rightly so: its founding documents, the Declaration of Independence and the Constitution and its Amendments. I believe these documents derive their strength not merely from the skill and wisdom of those who designed and wrote them, but also from the fact that they reflect the underlying bedrock, and are in tune with it. Without claiming standing as a constitutional scholar, or even a legal scholar, I do want to say a few words about some parts of the Constitution, not to challenge or undermine its stature, but to encourage thought about its place and its meaning.
Allow me to throw what appears to be a brickbat, and then argue why it neither is nor aspires to be one. I submit that the name 'Bill Of Rights' - but not its content! - is misleading at best, and possibly wrong. The preamble to it makes it quite clear that the intent is not so much the assignment of rights as the imposition of restraint on the government:

    THE conventions of a number of the States, having at the time of their adopting the Constitution, expressed a desire, in order to prevent misconstruction or abuse of its powers, that further declaratory and restrictive clauses should be added, and as extending the ground of public confidence in the government will best insure the beneficent ends of its institution, RESOLVED by the Senate and House of Representatives of the United States of America in Congress assembled, two-thirds of both Houses concurring, that the following Articles be proposed to the Legislatures of the several States as Amendments to the Constitution of the United States, all or any of which Articles, when ratified by three-fourths of the said Legislatures, to be valid to all intents and purposes as part of the said Constitution, viz. ARTICLES in addition to, and amendment of, the Constitution of the United States of America, proposed by Congress, and ratified by the Legislatures of the several States, pursuant to the fifth Article of the original Constitution.

In other words, what would become known as the Bill Of Rights was explicitly designed to impose restraints on the government. It casts a revealing light on the claim of 'I stand on my First-Amendment rights'. The government has no obligation to defend your choice to spout off at a street corner or over the internet. The government merely cannot forbid you from making a fool of yourself, or worse.
This may be a good place to warn you about one of the more insidious tricks of the debater - and the activist: the misuse of words that have multiple meanings, and the export of terms beyond the domain in which they are defined. Physicists invariably cringe at the casual use of 'energy' by weather forecasters and advocates of alternative medicine. The word 'right' has two very different meanings. The first is the meaning we find in the Declaration of Independence, and in the Declaration of Human Rights. The second we encounter as 'Miranda rights', 'right of first refusal' and such. These rights are generally constructed by negotiation and legislation. The 'Unalienable Rights' of the Declaration of Independence pre-exist as part of 'being human'.

One would think that the Founding Documents leave little room for relativism, but that has not stopped a well-recognized trend toward "I'm OK, you're OK." Admittedly, there is sound value in the Indian advice that one should not judge another human being without first having walked a mile in his moccasins. But that advice does not intend to say that judgment is 'inappropriate': it intends to say that one should judge conscientiously, and with consideration. I find it ironic that the current fashion that frowns on 'being judgmental' is rarely challenged for its internal contradiction: making judgment is on its face judged inappropriate ....

So let us review the four major ethics models I referred to earlier. The duty-based (deontological) model rests on the premise that some fundamental duties exist for all. The model is mute about where those duties come from, and about what happens when they are ignored. Apart from what one might call 'spiritual duties', the duties in this model consist largely of obligations to one's fellow humans. Under that view the rights-based model, at least under one understanding of the term 'right', is merely the mirror image of the duty-based model.
But it has become fashionable to export this notion of 'rights' to a far broader domain, which includes such things as the 'right to a living wage' and the 'right to medical care'. Those are very different from the right to human dignity that we know from the original domain. In particular, they lead to expectations that 'society' is obliged to protect individuals from adversity. It is easy to recognize that extrapolation as logically untenable: it would lead to demands for eternal life, and for reward unrelated to performance.

Goal-based - teleological - ethics: consequentialist ethics rests on evaluation of the outcome of actions. Its most common form is known as utilitarianism. This model can serve quite well as a framework for many practical situations, but it can be shown to fail when pushed to its logical limits. For example, we may observe that hacking is intolerable. It might be possible to reduce the appeal of hacking by 'sending a strong message', maybe by publicly dismembering 'a hacker'. Would it matter whether or not the victim actually was a malicious hacker? He would never tell .... So we could just as well pick a victim at random and do him in. All this would be consistent with the utilitarian model, but it is clearly outrageous.

The utilitarian model may be useful because we tend to apply it as a secondary filter, after we have dismissed the options that would fail under more fundamental arguments we find so obvious that we leave them unsaid. I submit that the notion of 'fairness' - the golden rule - probably serves to eliminate the unpalatable options before we apply utilitarian criteria. The fairness model, for which Wilson argues extensively, directs us to a form of utilitarianism in which social benefit is justified as a form of investment that will eventually reward us with a better environment. I believe the fairness model is also at the core of Ayn Rand's objectivism.
Wilson makes a revealing observation: he relates our apparent failure to discover a universal bedrock to the possibility that we may be seeking it in the wrong place. The nature of a 'universal bedrock' will have to be such that everybody takes it for granted. Fairness is precisely such a self-evident notion. Wilson illustrates this by describing a testable game - commonly known as the ultimatum game - which requires three actors. An umpire has control over a quantity of money, but is not otherwise a participant. Two players A and B are bound by two simple rules. Player A must propose a division of a pile of money between players A and B; the pile is placed on the table by the umpire. Player B may accept or reject the division proposed by A, but cannot modify it. If player B accepts, the proposed division is executed; if player B rejects it, all money reverts to the umpire.

What about relativism? Richard Feynman, among others, takes the relativists to task for egregious misunderstanding of Einstein. I don't know to what extent Einstein's theories have influenced philosophers, but one can find at least two arguments to dismiss relativism as untenable. The first is logical: 'it is ALL relative' contains the absolute 'all', contradicting the model it tries to define. The second is that the model would contradict all we can see about the society around us, and that it conflicts with the core of all major religions known today. Finally, it would effectively forbid the making of judgment. We should know better: we pick friends, spouses, presidents, employees, consultants, etc.
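The three-actor game Wilson describes - the umpire stakes the pile, player A proposes a split, player B may only accept or reject - can be sketched in a few lines of code. This is a minimal illustration, not anything from Wilson's book; the function and strategy names are invented for the example:

```python
def play_round(pile, proposer_share, accepts):
    """One round of the ultimatum game.

    pile           -- amount the umpire places on the table
    proposer_share -- amount player A proposes to keep for A
    accepts        -- B's decision rule, given (a_share, b_share)
    """
    a_share = proposer_share
    b_share = pile - proposer_share
    if accepts(a_share, b_share):
        return a_share, b_share   # B accepts: the division is executed
    return 0, 0                   # B rejects: all money reverts to the umpire

# A purely 'rational' B accepts any positive crumb; a 'fairness-driven' B
# rejects offers below some fraction of the pile, at a cost to himself.
rational = lambda a, b: b > 0
fair_minded = lambda a, b: b >= 0.3 * (a + b)

print(play_round(100, 99, rational))     # (99, 1): a crumb beats nothing
print(play_round(100, 99, fair_minded))  # (0, 0): B punishes the unfair split
print(play_round(100, 60, fair_minded))  # (60, 40): fair enough to accept
```

The interesting empirical point is the second case: real players routinely behave like `fair_minded`, sacrificing a sure gain to punish a division they perceive as unfair, which is what makes fairness look like bedrock rather than calculation.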
Back to a more practical level: to freedom of speech, privacy, reputation, internet governance ... Does 'freedom of speech' imply that you can say or shout whatever you jolly well please? Hardly. But government, precisely because it wields power from which citizens cannot easily escape, is constrained from using that power beyond what it is granted for. Then what 'rule' defines what one can, and cannot, say? Obedience to the Unenforceable. We are free to choose, but have to concede that we cannot choose arbitrarily. That is the domain of ethics.

The domain of ethics is not controlled by some 'ethics law' or 'code of conduct'. Such agreed-on rules are useful, because they can spare us much agonizing thought: the known situations can be answered by reference. But leaders are paid for dealing with the new situations. They are expected to 'think sensibly', not to 'obey slavishly'. So ethics is not about 'knowing the answers': ethics is about 'learning how to think'. Note: not 'knowing what to think'. The 'hard ethics' comes in places where the rules fail, or are absent. The absence of rules implies not only that one 'has to decide', but also that there is no place to hide when the decision is challenged later.

The book for this course raises the question what would be the best authority to govern the internet, and then develops a discussion comparing 'states', 'industry', 'netizens', 'code', and 'self-government'. To place such a question in context we may find it useful to look at the nature of cooperative groups. We can plausibly argue that within such groups no scheme is possible that maintains a uniform (flat) distribution of power. So, like it or not, there will arise some hierarchy of power, i.e. some form of government.
The question then is not what entity can best fill the role of government, but what people can serve best in government. Jane Jacobs has made some interesting observations about that issue. She recognizes two types of social syndromes: guardian and commercial. The guardians work at maintaining a workable environment, and the commercialists work at being productive within that environment. But if people, or institutions, develop the ambition to participate in both activities, that will lead to severe social instability. It makes sense: the mixing of the two positions leads to blatant conflict of interest. It may still be useful to ask from which domain one is most likely to recruit suitable people to accept the task of governance.

But the gorilla in the room is called 'democracy'. Personal freedom is not free, nor can it be guaranteed. It has to be earned. I missed the first conference on Computers, Freedom and Privacy (CFP), but attended a string of them after that. The conferences were born from a major attempt to 'catch hackers'. As luck would have it, that appeared to be easy: catch the people who attend the 'hacker conference'. What struck me at the CFP conferences was the delicate dance of members of two camps, neither of which was willing to entertain the thought that members of its side might not all be perfect, or that 'the other side' might not be unequivocally evil. So the 'acronym spooks' tried to present themselves as decent folk who had only the most laudable intent for the country, and the 'free spirits' tried to present themselves as innocent babes who only want to 'be left alone to dance'. As the years progressed, the two camps did appear to get to know each other better ....

There is a widely quoted statement, I believe attributed to Jefferson (though I have not been able to confirm that attribution), that 'That government is best which governs least'.
Few people are aware that that is only half of the sentence, and that the important part comes afterward: 'for its citizens discipline themselves.'

I expect that you will spend much time this semester discussing 'scenarios': defined situations that pose dilemmas. It may be wise to alert you to the fact that discussion of a scenario cannot be compared with reaching a decision in real life. When you get tired of the discussion you can declare the matter closed, sling on your backpack and walk out. After you have made a decision you have to live with the consequences, forever after. It may sound corny, but those who make major ethical decisions have to be prepared to stand behind those decisions with their lives: they have no rule to hide behind. Discussion of scenarios is useful, not because the discussion can lead to an 'answer', but because it offers an opportunity to develop your skills at reasoning - and judging - toward an answer.

Please be well aware: 'doing ethics' implies making judgment. Those who would tell you not to be judgmental should think about the contradiction in that dictate. Making judgment is not 'luxuriating in the power of it': it is the agonizing process of deciding what is right, and sometimes what is wrong. The need for judgment arises precisely where the rules end, and where they fail or conflict with each other.

So how does all this fit into your ambitions for a career as a computer scientist? Is not competence at writing Java code vastly more important? Ask yourself what your software is intended to do. Any nontrivial software will affect the lives of people, and much of it may have big effects on many people. Ask yourself whether you can be at peace with what your work does to all those people. Your ambitions may initially be no higher than a reasonable income and a comfortable cubicle. But those who aspire to 'be somebody' will wind up making decisions on behalf of others.
In that capacity they act as 'professionals' in the original sense: they are 'called forth' to act on behalf of their clients. I know that the term is used loosely in other contexts. But it is useful to look at what it meant originally, and to deduce from that what it implies. Among the key characteristics of professionalism is that the professional is expected to choose on behalf of his clients, and that his choices are not bound by rules that would reduce those choices to mere obedience. In other words, the professional is expected to 'make his own law' about his actions: his work requires autonomy. But autonomy is not 'anomy', the absence of law or rule. On the contrary, it is the presence of, and obedience to, a 'higher law' that resides in the integrity of the professional.

One more topic, to illustrate conflicting imperatives. We have come to value 'privacy' very highly, almost reverently. But an attempt to define what this 'privacy' is that we wish to protect soon reveals that it is a very slippery thing. Very crudely, privacy deals with control over what others are allowed to know about us, and how we might want to control what they can do with that knowledge. But it is not so simple to identify who 'owns' this sort of information, and even less simple to define who can control it, and to what extent.

Moreover, the elevation of privacy must eventually clash with a very sensible and defensible ambition: to 'build a reputation'. Computer scientists should have little trouble recognizing that 'reputation' shares its root with 'computer'. The Latin verb 'putare' may be translated as 'to reckon'. Computers combine pieces of information and then reckon with them. People tend to do the same, repeatedly, with the stuff they learn about those around them. A rigid enforcement of privacy would then interfere with the activity we know as reputation-building. To build a reputation you will have to make visible who you are.
Of course that implies that you must first think through who you want to be. It makes little sense to 'put on an act' if your aim is to let people find out who you really are. Beyond that, you presumably want to build a good reputation. You will want people to like and respect you for who you are, and for how you think and act. But you cannot 'just tell them' what to think of you: you have to persuade them to think well of you ....

If I were given the opportunity and authority to define the purpose of this course, I would hope that it will serve to extend mental discipline beyond 'cogent thought' to 'conscientious thought'. Cogent thought is what you will need to do good science. Conscientious thought is what prevents a good scientist from becoming a 'mad scientist'.

=========================

A handful of little books from and about Richard Feynman:

Six Easy Pieces                    ISBN 0-201-32842-9
Six Not-So-Easy Pieces             ISBN 0-201-40825-2
QED                                ISBN 0-691-02417-0
The Meaning of It All              ISBN 0-201-36080-2
Surely You're Joking, Mr. Feynman

The first three are extracted from the freshman physics course Feynman taught at Caltech in an attempt to make the course more exciting. Feynman himself wonders about its success: he notes that as the course progressed his audience contained fewer and fewer undergraduates, and more and more graduate students and fellow faculty. It is quite obvious that the text is transcribed from the spoken word (which was recorded: much of it can be purchased on audio tape and maybe CD). The 'not-so-easy pieces' contain Feynman's comments about the misunderstood interpretation of Einstein's theories by some philosophers. QED is about the work that earned Feynman the Nobel prize. The fourth book contains a set of three lectures Feynman was invited to give at the University of Washington in Seattle. The book carries the subtitle "Thoughts of a Citizen-Scientist"; Feynman used the lectures to reflect on religious, political and social issues of the day (1963).
The last book is autobiographical, contains little science, and lots of humor and playfulness.

========================

Fletcher Moulton: Law and Manners. Atlantic Monthly, July 1924 (Hale)
Richard Mitchell: The Gift of Fire (Manhattan Public Library)
James Q. Wilson: The Moral Sense. ISBN 0-684-83332-8, 1993
Matt Ridley: The Origins of Virtue. ISBN 0-670-87449-3, 1996
Charles Sykes: A Nation of Victims
Ayn Rand: Atlas Shrugged
Ayn Rand: The Virtue of Selfishness (anthology)
Ayn Rand: Essays in Objectivist Thought
Jane Jacobs: Systems of Survival: A Dialogue on the Moral Foundations of Commerce and Politics
M. van Swaay: The Value and Protection of Privacy (reprint)