Kushinara NIBBĀNA Bhumi Pagoda White Home, Puniya Bhumi Bengaluru, Prabuddha Bharat International.

January 2013
Filed under: General
Posted by: site admin @ 9:40 pm





Mayawati slammed the United Progressive Alliance (UPA) government’s direct cash transfer scheme, describing it as a ploy to hoodwink voters in the 2014 Lok Sabha polls.

“This is an old book with a new cover,” the Bahujan Samaj Party (BSP) supremo told reporters, pointing out that scholarships for the Scheduled Castes (SC) and the Scheduled Tribes (ST) initiated in the state under her chief ministership were already in place, with financial aid transferred straight to the bank accounts of the beneficiaries.

“It is an ‘adh kacchi’ (half-baked) scheme cooked up by the dhokebaaz (fraud) Congress, and I warn the people of the country not to fall for the allurements,” she said.

‘Aapka paisa, aapke haath’, the credo coined by the UPA government, was nothing but a “sham and a political stunt aimed at fooling the electorate,” she alleged.

Her remarks came a day after the central government began the New
Year by rolling out the ambitious direct cash transfer of benefits
covering seven welfare schemes in 20 districts across the country.

Mayawati asked people to keep the Congress away from power as it has never worked for the people, SC/STs and minorities.

The former chief minister condemned the Dec 16 Delhi gang-rape
and called for evolving a political consensus for tougher laws to check
crime against women. The victim died of her injuries Dec 29 in a
Singapore hospital.

The union government should also seek the opinion of legal luminaries, chief justices of all state high courts, and judges of the Supreme Court on the issue, Mayawati said.

“In such horrifying times, a tough response and stern action is required,” she said.

She said political parties should refrain from distributing
tickets to people with criminal antecedents. She expressed concern over
rising crime against women in Uttar Pradesh.

On demands for the resignation of Chief Minister Akhilesh Yadav following the apex court’s directive to the Central Bureau of Investigation (CBI) to probe graft charges against him, she said the answer should come from the Samajwadi Party.

On the growing chorus of Gujarat Chief Minister Narendra Modi
being a potential prime ministerial candidate, Mayawati said BSP would
do everything possible to prevent it.

New Delhi: Seeking to project the
image that her party’s base was not limited to Scheduled Castes, Bahujan
Samaj Party chief Mayawati has said that she is willing to support a
separate bill to provide reservation in promotion to backwards in
government jobs. Favouring early implementation of Sachar Committee
recommendations, Mayawati also sought to target arch rival Samajwadi
Party by claiming that the Akhilesh Yadav government in Uttar Pradesh
has failed to implement the poll promise of 18 per cent reservation for
Muslims in government jobs.

During the just-concluded Winter Session of Parliament, SP had
raised the issue of providing reservation for OBCs - its core vote base -
and Muslims in government job promotions on the lines of a bill which
gives such benefits to SCs and STs. “The present constitutional
amendment bill is necessary as the Supreme Court had quashed a
government policy to provide reservation in promotion to SC and ST
government employees.

“We support a separate bill to grant similar benefits to
backwards,” Mayawati said. She said despite being in power for nine
months now, the UP government has not implemented its poll manifesto of
providing 18 per cent reservation for Muslims in education and
government jobs.

BSP chief Mayawati for separate bill for promotion quota to backwards

Alleging that the Mulayam Singh Yadav-led Samajwadi Party was not interested
providing benefits to the upper castes, she said her government had
lifted a ban imposed by the SP government on government jobs to the
upper castes. “We believe in ’sarva samaj’…We also favour reservation
for poor people belonging to upper castes.”

Attacking the UP Chief Minister, the former state Chief Minister
said the law and order situation in Uttar Pradesh is deteriorating under
SP rule. “These are not my observations. But, wherever I meet people
they say the law and order is bad at all levels. Forget justice, even
police does not register our cases,” she said.

UPA not serious on quota bill, alleges Mayawati

BSP chief Mayawati said she always doubted the Centre’s sincerity on the issue.

Direct cash transfer scheme a sham, says Mayawati

Upset with the UPA government for
not being able to pass the SC/ST promotion quota bill in the Lok Sabha
during the Winter Session, BSP chief Mayawati said that though she has
warned the government, she isn’t planning to withdraw her support.

The Five Precepts

Dr. Sunthorn Plamintr

The purpose of Buddhist moral precepts

There are three fundamental modes of training in Buddhist practice: morality, mental culture, and wisdom. The English word morality is used to translate the Pali term sila, although the Buddhist term contains its own particular connotations. The word sila denotes a state of normalcy, a condition which is basically unqualified and unadulterated. When one practices sila, one returns to one’s own basic goodness, the original state of normalcy, unperturbed and unmodified. Killing a human being, for instance, is not basically human nature; if it were, human beings would have ceased to exist a long time ago. A person commits an act of killing because he or she is blinded by greed, rage or hatred. Such negative qualities as anger, hatred, greed, ill will, and jealousy are factors that alter people’s nature and make them into something other than their true self. To practice sila is thus to train in preserving one’s true nature, not allowing it to be modified or overpowered by negative forces.

This definition points to the objective of Buddhist morality rather than to the practice itself, but it does give us an idea of the underlying philosophy behind the training, as well as how the Buddhist moral precepts should be followed. These precepts are a means to an end; they are observed for a specific objective.

On the personal level, the observance of precepts serves as the preliminary groundwork for the cultivation of higher virtues or mental development. Sila is the most important step on the spiritual path. Without morality, right concentration cannot be attained, and without right concentration, wisdom cannot be fully perfected. Thus, morality not only enhances people’s ethical values and fulfills their noble status as human beings, but it is crucial to their efforts toward the highest religious goal of Nibbana.

On the social level, sila contributes to harmonious and peaceful coexistence among community members and consequently helps to promote social growth and development. In a society where morality prevails and members are conscious of their roles, there will be general security, mutual trust, and close cooperation, these in turn leading to greater progress and prosperity. Without morality there will be corruption and disturbance, and all members of society are adversely affected. Most of the problems that society experiences today are connected, directly or indirectly, with a lack of good morality.

Questions of morality always concern the issues of right and wrong, good and evil. For a moral life to be meaningful these issues must not remain mere theoretical principles but must be translated into practice. Good must be performed, evil must be given up. It is not enough to know what is good or evil; we also need to take proper action with respect to them. We need concrete guidelines to follow, and these are provided by the Buddhist moral precepts. Even the oft-quoted Buddhist ideals of abstention from evil, implementation of what is good, and perfect mental purification can be initially actualized through a consistent practice of moral precepts. The precepts help us to live those ideals; they teach us to do the right things and to avoid the wrong.

Buddhist moral precepts provide a wholesome foundation for personal and social growth. They are practical principles for a good life and the cultivation of virtues. If we understand the objectives of sila and realize its benefits, we will see moral precepts as an integral part of life rather than as a burden that we are compelled to shoulder. Buddhist moral precepts are not commandments imposed by force; they are a course of training willingly undertaken in order to achieve a desired objective. We do not practice to please a supreme being, but for our own good and the good of society. As individuals, we need to train in morality to lead a good and noble life. On the social level, we need to help maintain peace and harmony in society and facilitate the progress of the common good. The practice of moral precepts is essential in this regard.

Distinguishing good and evil

The problems of good and evil, right and wrong, have been dealt with in the discussion on kamma. Here it may suffice to give a brief summary on the subject.

To determine whether an action is good or evil, right or wrong, Buddhist ethics takes into account three components involved in a kammic action. The first is the intention that motivates the action, the second is the effect the doer experiences consequent to the action, and the third is the effect that others experience as a result of that action. If the intention is good, rooted in positive mental qualities such as love, compassion, and wisdom, if the result to the doer is wholesome (for instance, it helps him or her to become more compassionate and unselfish), and if those to whom the action is directed also experience a positive result thereof, then that action is good, wholesome, or skillful (kusala). If, on the other hand, the action is rooted in negative mental qualities such as hatred and selfishness, if the outcome experienced by the doer is negative and unpleasant, and if the recipients of the action also experience undesirable effects from the action or become more hateful and selfish, then that action is unwholesome or unskillful (akusala).

It is quite probable that on the empirical level an action may appear to be a mixture of good and bad elements, in spite of the intention and the way it is performed. Thus, an action committed with the best of intentions may not bring the desired result for either the doer or the recipient. Sometimes an action based on negative intentions may produce seemingly positive results (as stealing can produce wealth). Due to lack of knowledge and understanding, people may confuse one set of actions with an unrelated set of results and make wrong conclusions, or simply misjudge them on account of social values and conventions. This can lead to misconceptions about the law of kamma and loss of moral consciousness. This is why precepts are necessary in the practice of moral discipline: they provide definite guidelines and help to avoid some of the confusion that empirical observation and social conventions may entail.

Buddhist moral precepts are based on the Dhamma, and they reflect such eternal values as compassion, respect, self-restraint, honesty, and wisdom. These are values that are cherished by all civilizations, and their significance is universally recognized. Moral precepts that are based on such values or directed toward their realization will always be relevant to human society, no matter to what extent it has developed. Moreover, their validity can be empirically tested on the basis of one’s own sensitivity and conscience, which are beyond factors of time and place. Killing, for instance, is objectionable when considered from the perspective of oneself being the victim of the action (although when other lives are subjected to the same act, its undesirability may not be felt as strongly). The same is true with regard to stealing, lying, and sexual misconduct. Because Buddhist moral precepts are grounded on these factors, their practicality remains intact even today, and their usefulness is beyond question.

Precepts for lay Buddhists

Observance of the five precepts constitutes the minimum moral obligation of a practicing lay Buddhist. These five precepts enjoin against killing living beings, taking what is not given (or stealing), sexual misconduct, false speech, and use of intoxicating drink or drugs.

The practice of Buddhist moral precepts deeply affects one’s personal and social life. The fact that they represent a course of training which one willingly undertakes rather than a set of commandments willfully imposed by a God or supreme being is likely to have a positive bearing upon one’s conscience and awareness. On the personal level, the precepts help one to lead a moral life and to advance further on the spiritual path. Moreover, popular Buddhism believes that the practice of morality contributes to the accumulation of merits that both support one in the present life and ensure happiness and prosperity in the next. On the social level, observing the five precepts helps to promote peaceful coexistence, mutual trust, a cooperative spirit, and general peace and harmony in society. It also helps to maintain an atmosphere which is conducive to social progress and development, as we can see from the practical implications of each precept.

The first precept admonishes against the destruction of life. This is based on the principle of goodwill and respect for the right to life of all living beings. By observing this precept one learns to cultivate loving kindness and compassion. One sees others’ suffering as one’s own and endeavors to do what one can to help alleviate their problems. Personally, one cultivates love and compassion; socially, one develops an altruistic spirit for the welfare of others.

The second precept, not to take things which are not given, signifies respect for others’ rights to possess wealth and property. Observing the second precept, one refrains from earning one’s livelihood through wrongful means, such as by stealing or cheating. This precept also implies the cultivation of generosity, which on a personal level helps to free one from attachment and selfishness, and on a social level contributes to friendly cooperation in the community.

The third precept, not to indulge in sexual misconduct, includes rape, adultery, sexual promiscuity, paraphilia, and all forms of sexual aberration. This precept teaches one to respect one’s own spouse as well as those of others, and encourages the practice of self-restraint, which is of utmost importance in spiritual training. It is also interpreted by some scholars to mean the abstention from misuse of the senses and includes, by extension, non-transgression on things that are dear to others, or abstention from intentionally hurting others’ feelings.

For example, a young boy may practice this particular precept by refraining from intentionally damaging his sister’s dolls. If he damages them intentionally, he may be said to have committed a breach of morality. This precept is intended to instill in us a degree of self-restraint and a sense of social propriety, with particular emphasis on sexuality and sexual behavior.

The fourth precept, not to tell lies or resort to falsehood, is an important factor in social life and dealings. It concerns respect for truth. A respect for truth is a strong deterrent to inclinations or temptation to commit wrongful actions, while disregard for the same will only serve to encourage evil deeds. The Buddha has said: “There are few evil deeds that a liar is incapable of committing.” The practice of the fourth precept, therefore, helps to preserve one’s credibility, trustworthiness, and honor.

The last of the five Buddhist moral precepts enjoins against the use of intoxicants. On the personal level, abstention from intoxicants helps to maintain sobriety and a sense of responsibility. Socially, it helps to prevent accidents, such as car accidents, that can easily take place under the influence of intoxicating drink or drugs. Many crimes in society are committed under the influence of these harmful substances. The negative effects they have on spiritual practice are too obvious to require any explanation.

The five precepts

Theravada Buddhism preserves the Buddha’s teachings and conducts religious ceremonies mainly in the original Pali language. The five precepts are also recited in Pali, and their meanings are generally known to most Buddhists. Below, the original Pali text is given first, followed by the corresponding English translation:

1. Panatipata veramani sikkhapadam samadiyami: I observe the precept of abstaining from the destruction of life.

2. Adinnadana veramani sikkhapadam samadiyami: I observe the precept of abstaining from taking that which is not given.

3. Kamesu micchacara veramani sikkhapadam samadiyami: I observe the precept of abstaining from sexual misconduct.

4. Musavada veramani sikkhapadam samadiyami: I observe the precept of abstaining from falsehood.

5. Suramerayamajjapamadatthana veramani sikkhapadam samadiyami: I observe the precept of abstaining from intoxicants that cloud the mind and cause carelessness.

The refrain “I observe the precept of abstaining from …” which begins every precept clearly shows that these are not commandments. They are, indeed, moral codes of conduct that lay Buddhists willingly undertake out of clear understanding and conviction that they are good both for themselves and for society.

Practical application of the five precepts

Training is based on the axiomatic assumption that human beings have the potential for development. In order that this development may be realized, a concrete standard is needed by which people may train themselves. The five precepts are meant to fulfill this need.

For example, compassion is a spiritual quality that we all possess to some degree. However, without a conscious and persistent effort to develop it, this important quality may remain rudimentary and weak. By consciously practicing the first precept, we bring this compassion to a higher level of development and come a step closer to the realization of the Dhamma. In the process, our conduct becomes more refined and our mind becomes more sensitive to the problems and suffering of others. By practicing the second precept we not only purify our livelihood but train in generosity and non-attachment. The third precept has a direct connection with the training in sense restraint, which is an essential feature in higher spiritual development. In fact, enlightenment is not possible without mastery over the senses.

The fourth precept deals with training in truthfulness and virtuous speech. The objective of this precept is not only the cultivation of respect for truth, but a way of life that is sincere and free from falsehood in every respect. Even the fifth precept, which enjoins against the use of intoxicants, is not merely negative, for the resultant effects that take place in the mind in terms of mental strength and moral integrity are very positive. The observance of this precept is also a natural precursor to the cultivation of mindfulness and wisdom, which are the essence of insight meditation. Each and every precept increases our awareness of how we may skillfully conduct ourselves in body and speech and helps us to see more clearly whether we are improving in this process of self-discipline.

We may summarize the five precepts in relation to the spiritual qualities that they are likely to produce and promote as follows. The first precept helps to promote goodwill, compassion, and kindness. The second can be instrumental in developing generosity, service, altruism, non-attachment, contentment, honesty, and right livelihood. The third precept helps to cultivate self-restraint, mastery over the emotions and senses, renunciation, and control of sensual desire. The fourth precept leads to the development of honesty, reliability, and moral integrity. The fifth precept helps to promote mindfulness, clarity of mind, and wisdom.

Self-reliance and responsibility are important features of the practice of Buddhist morality. Because these precepts are meant to be a course of training, it can hardly be expected that each and every practitioner will be able to follow them without committing the slightest error, any more than it can be expected of a music student not to make a single mistake in the course of his lessons. For people with certain temperaments or occupations, some precepts may appear more difficult to follow than the rest, but that should not be an obstacle to making an attempt to keep the precepts. If one is discouraged from practicing, one need simply consider that these precepts are a course of training; and training, by definition, implies imperfection and a gradual process of development.

However, for those who are new to Buddhism, it may be a good idea to begin with greater emphasis on those precepts that are easier to follow, bearing the others in mind for later development. For instance, the second and the third precepts obviously need to be practiced by virtue of necessity, for they are supported by laws and are in perfect harmony with customs and conventions in all civilized societies. There is, therefore, hardly an excuse for not practicing them. Having dealt with these two precepts in this way, the remaining three present a much lighter and less daunting task. In fact, if we understand the contents and meaning of the five precepts correctly, we may come to feel that it is more natural to observe them than not to.

Moral precepts and livelihood

It is not true to say that fishermen, farmers, or hunters cannot observe the first precept. Like people in other trades and occupations, they may not be able to observe all the precepts all the time or in all circumstances, given their family obligations and livelihood, but they can certainly practice them on special occasions, like holy days, or when they are not actually engaged in their professions. In fact, there may be more opportunities to practice than at first seems possible. We observe the precepts in accordance with our abilities, training by degrees until we are able to make the precepts part and parcel of our lives.

In the time of the Buddha there were people engaged in occupations that involved killing, such as hunters or fishermen. Farmers, too, were not free from killing, although the intention involved might not be as direct. For all of these people the precepts were there to be practiced, and some were better able to do so than others. Each person has the opportunity to practice to the best of his or her abilities until they become more mature and are spiritually ready to give up occupations or trades that involve unwholesome kamma.

One difficulty for some people is the use of alcoholic drinks: some feel discouraged from keeping the fifth precept because some of their friends drink or because they have business dealings with people who drink. Peer pressure and business objectives may be an obstacle to the observance of this precept, but this is by no means insurmountable. Most people are reasonable and do understand religious conscience. Sometimes, citing physicians’ opinions may add weight to an excuse not to drink, but it is always best to be honest. In any case, a serious Dhamma practitioner should not allow trivial things like this to prevent him or her from trying to keep the precepts. There is always an opportunity to exert oneself if one is earnest in the practice.

Moral precepts and passivity

If one carefully studies the foregoing discussion on the five precepts, one will see that, although the Pali texts are worded in the negative “… abstaining from …”, there is the positive commitment “I undertake to observe the precept …” in all of them. Negative expressions do not necessarily represent negative or passive attitudes of mind. Of course, misunderstandings may result from misinterpretations of the Buddhist moral precepts (as they arise in regard to other Pali technical terms like Nibbana, dukkha, santutthi, and anatta).

From the practical perspective Buddhist moral precepts do contain both positive and negative aspects. However, from the psychological point of view it is important for practitioners to first recognize that which is bad or wrong and which should be abstained from. Abstention from wrong or evil deeds is the most significant step toward real development in spirituality. Strangely enough, it often appears that people are so preoccupied with doing good that they forget the most important duty of refraining from evil. That is why even though one scientific accomplishment after another is being achieved, crime rates are soaring unchecked, and thinking people begin to question the benefits of those accomplishments. In religious circles, devotees passionately try to accumulate more and more merits without ever pausing to reflect whether there are things that should be cleansed from their minds. As long as this negative aspect is not attended to on a practical level, spiritual progress will not come about. On the other hand, consider a society in which people were determined not to do evil and abstained from that which is bad and wrong; the result of such a ‘negative’ practice would indeed be most welcome. Even Nibbana is often negatively described as “the abandoning and destruction of desire and craving,” and “the extinction of desire, the extinction of hatred, and the extinction of delusion,” although it is positively the highest good.

Once wrong and evil deeds have been abandoned, it becomes more natural to do good. Since life means movement and action, any human expression which rejects evil is bound to be good and positive. If false speech is given up, whatever is spoken will naturally be truthful. Giving up of falsehood, which is a negative act, therefore constitutes in itself not only a negation, but a positive attitude and commitment. As the Buddha himself has admonished his followers:
“Abandoning false speech, one speaks the truth, becomes dependable, trustworthy, and reliable, and does not mislead the world. Abandoning malicious speech, one does not repeat there what has been heard here, nor does one repeat here what has been heard there, in order to sow the seeds of discord. One reconciles and unites those disunited and promotes closer bonds among friends. Unity is one’s delight and joy, unity is one’s love, it is the motive behind one’s verbal expression. Abandoning harsh speech, one employs a speech which is blameless, pleasant, acceptable, heart-touching, civilized, and agreeable. Abandoning frivolous speech, one uses speech which is appropriate to the occasion, correct, purposeful, and in accordance with the Dhamma-Vinaya. One utters words that are worthy, opportune, reasonable, meaningful, and straightforward.”

One important reason why the Buddhist moral precepts are phrased in negative terms is because the negative mode of expression tends to convey clearer and more specific injunctions which can be followed with ease. From a practical point of view, “Do not kill” carries stronger impact and a clearer definition than “Be kind to animals” and can be more conveniently practiced. From experience, however, we will see that anyone who consciously and constantly observes the first precept will naturally develop kindness toward people and animals. The second precept, which says, “Do not take what is not given,” covers all forms of wrong livelihood, whether by deception, fraud, bribery or theft. By earnestly observing this precept, one will naturally take a positive step in earning one’s livelihood in a righteous way. Through constant awareness and direct control of greed and avarice, which motivate wrong livelihood, one learns to develop generosity, altruism, and selfless service. These and other positive virtues result from the so-called negative actions of observing the moral precepts, clearly demonstrating how the precepts laid down by the Buddha can bear positive results, despite their wording and expression.

Moral dilemmas

The first of the five Buddhist moral precepts is based on the altruistic concept of universal love and compassion. It is not only a way of life and an exercise in personal morality, but also a part of the much larger scheme in spiritual discipline of which purity of body, speech, and mind are indispensable ingredients. As such it makes no exception in its practice, given the lofty ideal to which it is designed to lead. However, in real life situations, we may need a more practical attitude of mind to approach the problem in a more realistic manner.

First of all, we must recognize the fact that destruction of life is a negative act and the volition involved is an unwholesome one. By being honest with ourselves and by impartially contemplating the results that such acts bring, we can realize the wisdom of the first precept and consequently try to abstain from killing in any form. Perfection in the practice comes with spiritual maturity, and until perfection is attained, one needs to be aware of possible imperfections in the practice and try to improve oneself accordingly.

Because perfection in morality requires considerable effort and training, few can achieve it in the beginning. One need not, therefore, feel discouraged, but should learn how progress in the practice can be made through a systematized and graduated process of training. For instance, one may begin by resolving to abandon any killing that is not absolutely necessary. There are people who find pleasure in destroying other creatures, such as those who fish or hunt for sport. This type of killing is quite unnecessary and only demonstrates callousness. Others are engaged in sports which involve pain and suffering to animals and may even cost them their lives, such as bull fights, cock fights, and fish fights — all senseless practices designed to satisfy sadistic impulses. One who wishes to train in the Dhamma should avoid having anything to do with this kind of entertainment. One may also resolve to show kindness to other people and animals in an objective and concrete way whenever it is possible to do so. While circumstances may prevent absolute abstention from killing, this may help to refine the mind and develop more sensitivity to the suffering of other beings. Trying to look for an alternative livelihood that does not involve destruction of life is a further step to be considered.

Keeping one’s home free of pests or bugs by not creating conditions for their infestation helps reduce the necessity for exterminating them. Ecologically, this is a very commendable practice, since the adverse effects of chemical insecticides on the environment are well known. Prevention is, indeed, better than cure even concerning bugs and beetles. Cleanliness of habitat makes killing in such cases unnecessary. Even in the field of agriculture, insecticide-free farming is becoming increasingly popular and commercially competitive. If people are so inclined and compassion prevails, killing can be greatly avoided even in the real life situations of an ordinary householder with full family obligations and concerns.

In the unlikely event that killing is absolutely inevitable, it may be advisable to note the obvious distinction between killing out of cruelty and killing out of necessity. A person who goes out fishing for pleasure is cruel. While he may love children or make big donations to charitable institutions, as far as spirituality is concerned, his mind is not refined enough to be sensitive to the pain and suffering of the poor creatures living in the river. A man who hunts for a living does so because it is necessary to maintain himself and his family. It would seem quite understandable that in the latter case the unwholesome effects would likely be much lighter than in the former. The same is true in the case of killing in self-defense. Killing dangerous animals, vermin, and insects accrues less kammically unwholesome consequences than killing a human being or an animal that serves man (such as a horse, a dog, or an elephant).

Buddhism, capital punishment and war

As a student of Buddhism, one may realize that each person practices Dhamma according to his or her ability and the opportunities that arise. A policeman on duty patrolling a crime-infested street or a soldier at a border outpost surveying suspicious movements inside hostile territory will experience totally different circumstances in spiritual endeavor from a monk sitting peacefully in his cloistered cell. Yet, what they do have in common is the opportunity to perform their duty. Each must therefore understand how the Dhamma can be best practiced, given the situation he is in. All of us are bound up with certain duties, one way or another. Where policemen and soldiers are concerned, it would be naive to deny that their duties do include the possibility of killing.
It cannot be overemphasized, however, that destruction of life is, from a Buddhist standpoint, never justified. But in discussing the issue in question, one must distinguish between spiritual objectives and those of national security and administration. Capital punishment, for instance, is an instrument by which law and order may be effectively maintained for the common good of society, although Buddhism would not advocate that such a measure is conducive to the police officers’ spiritual well-being. The principles and purposes on which the police and military institutions were established are as far apart from those on which Buddhist spiritual training was formulated as anything can be. Yet, Buddhism and those secular institutions do coexist now, as they did during the time of the Buddha. Important military chiefs and dignitaries are known to have been the Buddha’s most devout followers. One should not, therefore, make the mistake of concluding that a person cannot be a Buddhist, or keep the Buddhist moral precepts for that matter, if he serves in the armed forces or police establishment. As has been said before, there are more opportunities to practice the precepts than not to practice; this is true even where the above-mentioned professions are concerned.

Stealing from the rich to feed the poor

Helping the poor is a commendable effort, but stealing from the rich to fulfill that commitment can hardly be justified. If this were made into a standard practice, society would be in turmoil. Rights of possession would be ignored, and stealing would become the accepted norm. Finally, the practice would defeat itself, and thievery would be recognized as a charitable act. This is hardly a desirable state of affairs; it is something not even remotely resembling a moral condition.
One of the distinct features of the Buddhist moral precepts is the universal character in which they may be practiced with benefit by all members of society. For instance, non-stealing (second precept) can be universally observed with desirable results, and the practice will help to promote coexistence, peace, and harmony in society. If this precept were reversed and stealing were made a moral principle, we can immediately see that there would be so much conflict and confusion that society would eventually cease to function. Thus, stealing can never be made a moral act, no matter how ideal and noble the motivation.

Extramarital sex

This is a rather complex issue involving ramifications in emotional, social, and moral fields. The problem is a cause for concern in modern times, especially in the West where materialism has for so long been the philosophy of life.

The third moral precept advises against all forms of sexual misconduct, which include rape, adultery, promiscuity, paraphilia, and sexual perversions. Actually, the Buddhist commentary emphasizes adultery more than anything else, but if we take into account the purpose and intention of the precept, it is clear that the precept is intended to cover all improper behavior with regard to sex. The broadest interpretation even purports to mean abstention from the misuse of the senses. The expression “misuse of the senses” is somewhat vague. It could refer to any morally unwholesome action committed under the influence of sensual desire or to the inability to control one’s own senses. In any case there is no doubt that the third precept aims at promoting, among other things, proper sexual behavior and a sense of social decency in a human civilization where monogamy is commonly practiced and self-restraint is a cherished moral value.
For one reason or another, many young people in love are not able to enter into married life as early as they wish. While marriage is still some distance in the future, or even an uncertain quantity, these people enter into relationships, of which sex forms a significant part. This happens not only among adults, who must legally answer for their own conduct, but also among teenagers who are still immature, emotionally unstable, and tend to act in irresponsible ways. Peer pressure and altered moral values are important contributing factors to the escalation of the problem. The trend toward extramarital sex has become so common that it is now virtually taken for granted. Cohabitation arrangements are becoming increasingly popular, and marriage is relegated to a place of insignificance, jeopardizing in the process the sanctity of family life.

In the context of these developments, the third precept becomes all the more relevant and meaningful. Unlike killing, which certain circumstances seem to warrant, there is hardly any plausible excuse for sexual promiscuity, except human weakness and inability to restrain the sexual urge. However, there is a distinction between sexual promiscuity and a sexual relationship based on mutual trust and commitment, even if the latter were a relationship between two single adults. Thus one may begin to practice the third precept by resolving not to be involved in sexual activities without an earnest intention and serious commitment of both parties. This means that sex should not be consummated merely for the sake of sexuality, but should be performed with full understanding between the people involved and with mutual responsibility for its consequences. A certain level of maturity and emotional stability is necessary to ensure a healthy and productive sexual relationship between two partners. With the realization that there is a better and more noble path to follow than promiscuity, one may see the wisdom of self-restraint and the benefit of establishing a more lasting and meaningful relationship which, rather than impeding one’s spiritual progress, may enhance it.

Finally, if all else fails to convince people of the danger and undesirability of sexual promiscuity, perhaps the phenomenal AIDS epidemic will. This may seem beside the point, since moral precepts and moral integrity are matters that concern inner strength, fortitude, and conscientious practice, not fear and trepidation based on extraneous factors. It is, nevertheless, worthwhile to consider the connection between promiscuous behavior and the AIDS epidemic and realize how strict observance of the third Buddhist moral precept could greatly reduce the risk of infection or spread of this deadly disease. Acceptance of this fact may also lead to an appreciation of the value of morality and moral precepts as laid down by the Buddha, consequently strengthening conviction in the Dhamma practice.
White lies

The practice of the fourth precept aims at inculcating a respect for truth in the mind, implying both one’s own obligations as well as the rights of other people to truth. This is one of the most important components in developing sound social relationships, and it makes all documents, contracts, agreements, deeds, and business dealings meaningful. When we resort to falsehood, we not only become dishonest but also show disrespect to the truth. People who tell lies discredit themselves and become untrustworthy.
It is true that sometimes telling lies may prove more profitable than truth, especially from the material point of view. Because such gains are unwholesome and may cause harm in the long run, and because material profits are likely to lead to more falsehood and fabrication, it is imperative that the practice of the fourth precept be duly emphasized. Where a person’s reputation and feelings are concerned, discretion should be exercised. Of course, there are instances where silence is more appropriate than speech, and one may choose this as an alternative to prevarication and falsehood.

Motivation is an important element in determining if one is transgressing the fourth precept and whether a given verbal expression constitutes a kammically unwholesome act. For instance, when an event is fictionalized for literary purposes, this may not be regarded as falsehood as such for the intention of the work is obvious and there is no attempt at falsification involved. Another example is the case of an invective, where an abusive expression is used (such as angrily calling someone a dog). This is a case of vituperation rather than fabrication or falsification, although it is, nonetheless, a kammically unwholesome act. Also, there is a clear distinction between expressing untruth with a selfish intention and with a well-meaning motive, as when a concocted story is told for instructional purposes or a white lie is told in order to keep an innocent child out of danger.

These latter two instances are even accepted as illustrations of the employment of skillful means. A story is told of a mother who returns home to find her house on fire. Her little son is playing in the house, unaware that its burning roof could collapse at any moment. He is so engrossed that he pays no attention to his mother, who is now in great distress, being unable to get into the house herself. So she calls out to her child, “Come quickly, my little one, I have some wonderful toys for you. All the toys you ever wanted to have are here!” In this instance the mother is using a skillful means that eventually saves the boy’s life. Under certain circumstances, this may be the only alternative, but indiscriminate use of such means may lead to undesirable results. One needs to be judicious, therefore, in the practice of the precepts.

Sometimes speaking the truth may cause more harm than good, especially if it is done with malicious intent. A vindictive neighbor who spreads scandal about the family next door may be speaking the truth, but she is neither doing anyone a service, nor is she practicing the Dhamma. A spy who sells his nation’s sensitive classified information to an enemy may be speaking the truth, but he could cause much harm to his nation’s security and jeopardize many innocent lives. The Buddha says, therefore, that one should speak the truth which is useful and conducive to the Dhamma, and should avoid that which is useless and is likely to cause unwholesome kamma to oneself and others.

Intoxicants

The fifth precept covers all intoxicants, including narcotics, that alter the state of consciousness and are physiologically addictive. The danger and negative effects of narcotics, such as cocaine and heroin, are too well known to need any further elaboration. Today they represent a serious health and social problem around the world.

Drinking intoxicants is not part of the Buddhist culture, although it seems to have become a widespread phenomenon in modern society. It is true that alcoholic consumption was prevalent before and during the time of the Buddha, but he never approved of the practice. The fact that something is commonly practiced does not necessarily mean that it is good and wholesome. Those who advocate drinking as a factor for promoting friendship forget to take account of the reality that so many friendships have been drowned in those intoxicants. The brawls, strife, and unruly behavior that often follow the consumption of alcoholic beverages bear unequivocal testimony to the ignoble state to which human beings can be reduced under the influence of intoxicants. Friendship founded on compassion and mutual understanding is much more desirable than that which is based on alcohol. Social drinking may produce a general euphoric atmosphere among drinkers (and probably a nuisance for nondrinkers), but it is never a necessary condition for interpersonal relationships. Often, people use this as an excuse to get drunk. The high rate of car accidents connected with drunk driving should serve as a strong reminder of the danger and undesirability of alcoholic consumption. On the other hand, it may be mentioned in passing that liquor does contain certain medicinal properties and can be used for medical purposes. Such use, if genuine and under qualified supervision, does not entail transgression of the fifth precept and is not considered a morally unwholesome act.

The most obvious danger of intoxicants is the fact that they tend to distort the sensibilities and deprive people of their self-control and powers of judgment. Under alcoholic influences, a person is likely to act rashly and without due consideration or forethought. Otherwise decent people may even commit murder or rape under the influence of alcohol, or cause all kinds of damage (such as fire, accident, and vandalism) to people or property. The Buddha described addiction to intoxicants as one of the six causes of ruin. It brings about six main disadvantages: loss of wealth, quarrels and strife, a poor state of health (liability to diseases), a source of disgrace, shameless and indecent behavior, and weakened intelligence and mental faculties.
Other precepts

Occasionally, lay Buddhists may take the opportunity to observe the eight precepts as a means of developing higher virtues and self-control. Of course, these can be practiced as often as one wishes, but the special occasions on which they are normally observed are the holy days, especially the more important ones, the three-month rains retreat, and special events connected with one’s life. Sometimes, a Buddhist may observe them even as a token of gratitude and respect to a deceased relative or on the occasion of a birth anniversary of a monk he reveres. Four of these eight precepts are identical with the five precepts mentioned above. In order, they are as follows:

1. to abstain from the destruction of life 
2. to abstain from stealing or taking what is not given 
3. to abstain from sexual intercourse (to practice celibacy) 
4. to abstain from falsehood 
5. to abstain from alcoholic drinks 
6. to abstain from partaking of food from afternoon till the following daybreak 
7. to abstain from singing and entertainments, from decorating oneself and use of perfumes 
8. to abstain from the use of large and luxurious beds.

[Originally published in Sunthorn Plamintr’s Getting to Know Buddhism (Bangkok: Buddhadhamma Foundation, 1994), pp. 133-154.]

Morality and religion

From Wikipedia, the free encyclopedia


Morality and religion is the relationship between religious views and morals. Religions have value frameworks that guide adherents in determining between right and wrong. These include the Triple Gems of Jainism, Judaism’s Halacha, Islam’s Sharia, Catholicism’s Canon Law, Buddhism’s Eightfold Path, and Zoroastrianism’s “good thoughts, good words, and good deeds” concept, among others.[1] These frameworks are outlined and interpreted by various sources such as holy books, oral and written traditions, and religious leaders. Many of these share tenets with secular value frameworks such as consequentialism, freethought, and utilitarianism.

Religion and morality are not synonymous. According to The Westminster Dictionary of Christian Ethics, religion and morality “are to be defined differently and have no definitional connections with each other. Conceptually and in principle, morality and a religious value system are two distinct kinds of value systems or action guides.”[2]

Value judgments can vary greatly between religions, past and present. Monotheistic religions such as Christianity, Islam, and Judaism typically derive ideas of right and wrong from the rules and laws set forth in their respective holy books and by their religious leaders. Polytheistic religions such as Buddhism and Hinduism generally draw from a broader canon of work.[3] There has been interest in the relationship between religion and crime and other behavior that does not adhere to contemporary laws and social norms in various countries. Studies conducted in recent years have explored these relationships, but the results have been mixed and sometimes contradictory.[4] The ability of religious faiths to provide value frameworks that are seen as useful is a debated matter. Religious commentators have asserted that a moral life cannot be led without an absolute lawgiver as a guide. Other observers assert that moral behaviour does not rely on religious tenets, and secular commentators point to ethical challenges within various religions that conflict with contemporary social norms.


Relationship between religion and morality

Within the wide range of ethical traditions, religious traditions co-exist with secular value frameworks such as humanism, utilitarianism, and others. There are many types of religious values. Modern monotheistic religions, such as Islam, Judaism, Christianity, and to a certain degree others such as Sikhism, define right and wrong by the laws and rules set forth by their respective gods and as interpreted by religious leaders within the respective faith. Polytheistic religious traditions tend to be less absolute. For example, within Buddhism, the intention of the individual and the circumstances should be accounted for to determine if an action is right or wrong.[5] A further disparity between the morals of religious traditions is pointed out by Barbara Stoler Miller, who states that, in Hinduism, “practically, right and wrong are decided according to the categories of social rank, kinship, and stages of life. For modern Westerners, who have been raised on ideals of universality and egalitarianism, this relativity of values and obligations is the aspect of Hinduism most difficult to understand”.[6]
The Westminster Dictionary of Christian Ethics says that,

For many religious people, morality and religion are the same or inseparable; for them either morality is part of religion or their religion is their morality. For others, especially for nonreligious people, morality and religion are distinct and separable; religion may be immoral or nonmoral, and morality may or should be nonreligious. Even for some religious people the two are different and separable; they may hold that religion should be moral and morality should be religious, but they agree that they may not be.[7]

Richard Paul and Linda Elder of the Foundation for Critical Thinking assert that “most people confuse ethics with behaving in accordance with social conventions, religious beliefs, and the law”. They separate the concept of ethics from these topics, stating that
The proper role of ethical reasoning is to highlight acts of two kinds: those which enhance the well-being of others—that warrant our praise—and those that harm or diminish the well-being of others—and thus warrant our criticism.[8]

They note problems that could arise if religions defined ethics, such as (1) religious practices like “torturing unbelievers or burning them alive” potentially being labeled “ethical”, and (2) the lack of a common religious baseline across humanity, because religions provide different theological definitions for the idea of “sin”.[9] They further note that various documents, such as the UN Declaration of Human Rights, lay out “transcultural” and “trans-religious” ethical concepts and principles, such as the condemnation of slavery, genocide, torture, sexism, racism, murder, assault, fraud, deceit, and intimidation, which require no reliance on religion (or social convention) for us to understand that they are “ethically wrong”.[10]

Religion and societal mores

According to Gregory S. Paul, theists assert that societal belief in a creator god “is instrumental towards providing the moral, ethical and other foundations necessary for a healthy, cohesive society.”[11] Yet, empirical evidence indicates the opposite.[12] High rates of religiosity are correlated with “higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy, and abortion in the prosperous democracies.”[13] Paul concludes that

The non-religious, proevolution democracies contradict the dictum that a society cannot enjoy good conditions unless most citizens ardently believe in a moral creator. The widely held fear that a Godless citizenry must experience societal disaster is therefore refuted.[14]

Religious frameworks

Religions provide different ways of dealing with moral dilemmas. For example, there is no absolute prohibition on killing in Hinduism, which recognizes that it “may be inevitable and indeed necessary” in certain circumstances.[15] In monotheistic traditions, certain acts are viewed in more absolute terms, such as abortion or divorce. In the latter case, a 2008 study by the Barna Group found that those within religious traditions have a higher divorce rate than those in non-religious demographic groups (atheists and agnostics). However some religious groups had even lower divorce rates and the agnostic/atheist group had by far the lowest number of married couples to begin with.[16] Religion is not always positively associated with morality. Philosopher David Hume stated that, “the greatest crimes have been found, in many instances, to be compatible with a superstitious piety and devotion; Hence it is justly regarded as unsafe to draw any inference in favor of a man’s morals, from the fervor or strictness of his religious exercises, even though he himself believe them sincere.”[17]

Religion and crime

The overall relationship between faith and crime is unclear. A 2001 review of studies on this topic found “The existing evidence surrounding the effect of religion on crime is varied, contested, and inconclusive, and currently no persuasive answer exists as to the empirical relationship between religion and crime.”[18] Dozens of studies have been conducted on this topic since the twentieth century. A 2005 study by Gregory S. Paul published in the Journal of Religion and Society argues for a positive correlation between the degree of public religiosity in a society and certain measures of dysfunction;[19] however, an analysis published later in the same journal contends that a number of methodological problems undermine any findings or conclusions to be drawn from the research.[20] In another response, Gary Jensen builds on and refines Paul’s study.[21] His conclusion is that a “complex relationship” exists between religiosity and homicide “with some dimensions of religiosity encouraging homicide and other dimensions discouraging it”.

Some works indicate that lower levels of religiosity in a society may be correlated with lower crime rates—especially violent crime. Phil Zuckerman’s 2008 book, Society without God, notes that Denmark and Sweden, “which are probably the least religious countries in the world, and possibly in the history of the world”, enjoy “among the lowest violent crime rates in the world [and] the lowest levels of corruption in the world”.[22][a] The 2005 Paul study stated that, “In general, higher rates of belief in and worship of a creator correlate with higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy, and abortion in the prosperous democracies,” and “In all secular developing democracies a centuries long-term trend has seen homicide rates drop to historical lows” with the exceptions being the United States (with a high religiosity level) and “theistic” Portugal.[23][b] On April 26, 2012, the results of a study testing subjects’ pro-social sentiments were published in the journal Social Psychological and Personality Science; non-religious people scored higher, indicating that they were more inclined to show generosity in random acts of kindness, such as lending their possessions or offering a seat on a crowded bus or train. Religious people also scored lower on measures of how much compassion motivated them to be charitable in other ways, such as giving money or food to a homeless person or to non-believers.[24][25]

Other studies seem to show positive links in the relationship between religiosity and moral behavior[26][27][28]—for example, surveys suggesting a positive connection between faith and altruism.[29] Modern research in criminology also acknowledges an inverse relationship between religion and crime,[30] with some studies establishing this connection.[31] A meta-analysis of 60 studies on religion and crime concluded, “religious behaviors and beliefs exert a moderate deterrent effect on individuals’ criminal behavior”.[32] However, in his books about materialism in America’s Evangelical churches, Ron Sider accuses fellow Christians of failing to do better than their secular counterparts in the percentage adhering to widely held moral standards (e.g., lying, theft, and sexual infidelity).[33]
Criticism of religious values

Religious values can diverge from commonly-held contemporary moral positions, such as those on murder, mass atrocities, and slavery. For example, Simon Blackburn states that “apologists for Hinduism defend or explain away its involvement with the caste system, and apologists for Islam defend or explain away its harsh penal code or its attitude to women and infidels”.[34] In regard to Christianity, he states that the “Bible can be read as giving us a carte blanche for harsh attitudes to children, the mentally handicapped, animals, the environment, the divorced, unbelievers, people with various sexual habits, and elderly women”.[35] He provides examples such as the phrase in Exodus 22:18 that has “helped to burn alive tens or hundreds of thousands of women in Europe and America”: “Thou shalt not suffer a witch to live,” and notes that the Old Testament God apparently has “no problems with a slave-owning society”, considers birth control a crime punishable by death, and “is keen on child abuse”.[36] Blackburn notes morally suspect themes in the Bible’s New Testament as well.[37]

Bertrand Russell stated that, “there are also, in most religions, specific ethical tenets which do definite harm. The Catholic condemnation of birth control, if it could prevail, would make the mitigation of poverty and the abolition of war impossible. The Hindu beliefs that the cow is a sacred animal and that it is wicked for widows to remarry cause quite needless suffering.”[38] He asserts that
You find this curious fact, that the more intense has been the religion of any period and the more profound has been the dogmatic belief, the greater has been the cruelty and the worse has been the state of affairs…. You find as you look around the world that every single bit of progress in humane feeling, every improvement in the criminal law, every step toward the diminution of war, every step toward better treatment of the colored races, or every mitigation of slavery, every moral progress that there has been in the world, has been consistently opposed by the organized churches of the world.[39]

According to Paul Copan, Jewish laws in the Bible show an evolution of moral standards towards protecting the vulnerable, imposing a death penalty on those pursuing forced slavery and identifying slaves as persons and not property.[40]
According to Bertrand Russell, “Clergymen almost necessarily fail in two ways as teachers of morals. They condemn acts which do no harm and they condone acts which do great harm.”[41] He cites an example of a clergyman who was warned by a physician that his wife would die if she had another (her tenth) child, but impregnated her regardless which resulted in her death. “No one condemned him; he retained his benefice and married again. So long as clergymen continue to condone cruelty and condemn “innocent” pleasure, they can only do harm as guardians of the morals of the young.”[42]
Russell further states that, “The sense of sin which dominates many children and young people and often lasts on into later life is a misery and a source of distortion that serves no useful purpose of any sort or kind.”[43] Russell allows that religious sentiments have, historically, sometimes led to morally acceptable behavior, but asserts that, “in the present day, [1954] such good as might be done by imputing a theological origin to morals is inextricably bound up with such grave evils that the good becomes insignificant in comparison.”[44]

Morality without religion

Main article: Morality without religion

All the world’s major religions, with their emphasis on love, compassion, patience, tolerance, and forgiveness can and do promote inner values. But the reality of the world today is that grounding ethics in religion is no longer adequate. This is why I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether.[45]

The 14th Dalai Lama, Tenzin Gyatso, 10 September 2012


There are a number of secular value frameworks, such as consequentialism, freethought, humanism, and utilitarianism. Yet, there have been opposing views about the ability of both religious and secular moral frameworks to provide useful guides to right and wrong actions.
According to Thomas Dixon, “Many today … argue that religious beliefs are necessary to provide moral guidance and standards of virtuous conduct in an otherwise corrupt, materialistic, and degenerate world.”[46] In the same vein, Christian theologian Ron Rhodes has remarked that “it is impossible to distinguish evil from good unless one has an infinite reference point which is absolutely good.”[47] Thomas Dixon states, “Religions certainly do provide a framework within which people can learn the difference between right and wrong.”[46]

Various non-religious commentators have supported the ability of secular value frameworks to provide useful guides. Bernard Williams argued that, “Either one’s motives for following the moral word of God are moral motives, or they are not. If they are, then one is already equipped with moral motivations, and the introduction of God adds nothing extra. But if they are not moral motives, then they will be motives of such a kind that they cannot appropriately motivate morality at all … we reach the conclusion that any appeal to God in this connection either adds to nothing at all, or it adds the wrong sort of thing.”[48] Other observers criticize religious morals as incompatible with modern social norms. For example, popular atheist Richard Dawkins, writing in The God Delusion, has stated that religious people have committed a wide variety of acts and held certain beliefs through history that are considered today to be morally repugnant. In accordance with Godwin’s Law, he has stated that Adolf Hitler and the Nazis held broadly Christian religious beliefs that inspired the Holocaust on account of antisemitic Christian doctrine, that Christians have traditionally imposed unfair restrictions on the legal and civil rights of women, and that Christians have condoned slavery of some form or description throughout most of Christianity’s history. According to Paul Copan, the position of the Bible toward slaves is a positive one for the slaves in that Jewish laws imposed a death penalty on those pursuing slavery and treated slaves as persons, not property.[40]
See also
    •    Ethics in religion
    •    Morality without religion
a.^ Zuckerman identifies that Scandinavians have “relatively high rates of petty crime and burglary”, but “their overall rates of violent crime—such as murder, aggravated assault, and rape—are among the lowest on earth” (Zuckerman 2008, pp. 5–6).
b.^ The authors also state that “A few hundred years ago rates of homicide were astronomical in Christian Europe and the American colonies,”[49] and “[t]he least theistic secular developing democracies such as Japan, France, and Scandinavia have been most successful in these regards.”[50] They argue for a positive correlation between the degree of public religiosity in a society and certain measures of dysfunction;[19] however, an analysis published later in the same journal argues that a number of methodological problems undermine any findings or conclusions in the research.[20]
    1.    ^ Epstein, Greg M. (2010). Good Without God: What a Billion Nonreligious People Do Believe. New York: HarperCollins. p. 117. ISBN 978-0-06-167011-4.
    2.    ^ Childress, (ed) James F.; Macquarrie, (ed) John (1986). The Westminster Dictionary of Christian Ethics. Philadelphia: The Westminster Press. p. 401. ISBN 0-664-20940-8.
    3.    ^ Bodhippriya Subhadra Siriwardena, ‘The Buddhist perspective of lay morality’, 1996
    4.    ^ Edgar Saint George, “Religion’s Effects On Crime Rates”
    5.    ^ Peggy Morgan, “Buddhism.” In Morgan, Peggy; Lawton, Clive A., eds. (2007). Ethical Issues in Six Religious Traditions (Second ed.). Columbia University Press. pp. 61, 88–89. ISBN 978-0-7486-2330-3.
    6.    ^ Miller, Barbara Stoler (2004). The Bhagavad Gita: Krishna’s Counsel in Time of War. New York: Random House. p. 3. ISBN 0-553-21365-2.
    7.    ^ Childress, (ed) James F.; Macquarrie, (ed) John (1986). The Westminster Dictionary of Christian Ethics. Philadelphia: The Westminster Press. p. 400. ISBN 0-664-20940-8.
    8.    ^ Paul, Richard; Elder, Linda (2006). The Miniature Guide to Understanding the Foundations of Ethical Reasoning. United States: Foundation for Critical Thinking Free Press. pp. np. ISBN 0-944-583-17-2.
    9.    ^ Paul, Richard; Elder, Linda (2006). The Miniature Guide to Understanding the Foundations of Ethical Reasoning. United States: Foundation for Critical Thinking Free Press. pp. np. ISBN 0-944-583-17-2.
    10.    ^ Paul, Richard; Elder, Linda (2006). The Miniature Guide to Understanding the Foundations of Ethical Reasoning. United States: Foundation for Critical Thinking Free Press. pp. np. ISBN 0-944-583-17-2.
    11.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies”. Journal of Religion & Society 7: 1. Retrieved 8 October 2012.
    12.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies”. Journal of Religion & Society 7: 7–8. Retrieved 8 October 2012. According to Paul, “Data correlations show that in almost all regards the highly secular democracies consistently enjoy low rates of societal dysfunction, while pro-religious and antievolution America performs poorly.”
    13.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies”. Journal of Religion & Society 7: 1. Retrieved 8 October 2012.
    14.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies”. Journal of Religion & Society 7: 7–8. Retrieved 8 October 2012.
    15.    ^ Werner Menski, “Hinduism.” In Morgan, Peggy; Lawton, Clive A., eds. (2007). Ethical Issues in Six Religious Traditions (Second ed.). Columbia University Press. p. 5. ISBN 978-0-7486-2330-3.
    16.    ^ Barna Group (31 March 2008). “New Marriage and Divorce Statistics Released”. Barna Group. Retrieved 19 November 2011.
    17.    ^ David Hume, “The Natural History of Religion.” In Hitchens, Christopher (2007). The Portable Atheist: Essential Readings for the Nonbeliever. Philadelphia: Da Capo Press. p. 30. ISBN 978-0-306-81608-6.
    18.    ^ Baier, Colin J.; Wright, Bradley R. E. (February 2001). “If You Love Me, Keep My Commandments”: A Meta-analysis of the Effect of Religion on Crime. 38. No. 1. Journal of Research in Crime and Delinquency. p. 3. Retrieved 20 November 2011. Original in italics.
    19.    ^ a b Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies: A First Look”. Journal of Religion and Society (Baltimore, Maryland) 7.
    20.    ^ a b Gerson Moreno-Riaño; Mark Caleb Smith, Thomas Mach (2006). “Religiosity, Secularism, and Social Health”. Journal of Religion and Society (Cedarville University) 8.
    21.    ^ Jensen, Gary F. (2006). Religious Cosmologies and Homicide Rates among Nations: A Closer Look. Journal of Religion and Society, Volume 8. Department of Sociology, Vanderbilt University. ISSN 1522-5658.
    22.    ^ Zuckerman, Phil. Society Without God: What the Least Religious Nations Can Tell Us about Contentment. New York: New York University Press. p. 2. ISBN 978-0-8147-9714-3. Zuckerman’s work is based on his studies conducted during a 14-month period in Scandinavia in 2005–2006.
    23.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies: A First Look”. Journal of Religion and Society (Baltimore, Maryland) 7: 4, 5, 8, and 10.
    24.    ^ “Highly Religious People Are Less Motivated by Compassion Than Are Non-Believers”. Science Daily.
    25.    ^ Saslow, Laura R.; Willer, Robb; Feinberg, Matthew; Piff, Paul K.; Clark, Katharine; Keltner, Dacher; Saturn, Sarina R. “My Brother’s Keeper? Compassion Predicts Generosity More Among Less Religious Individuals”.
    26.    ^ Kerley, Kent R.; Matthews, Todd L.; Blanchard, Troy C. (2005). Religiosity, Religious Participation, and Negative Prison Behaviors. Journal for the Scientific Study of Religion 44 (4), 443–457. doi:10.1111/j.1468-5906.2005.00296.x
    27.    ^ Saroglou, Vassilis; Pichon, Isabelle; Trompette, Laurence; Verschueren, Marijke; Dernelle, Rebecca (2005). Prosocial Behavior and Religion: New Evidence Based on Projective Measures and Peer Ratings. Journal for the Scientific Study of Religion 44 (3), 323–348. doi:10.1111/j.1468-5906.2005.00289.x
    28.    ^ Regnerus, Mark D.; Burdette, Amy (2006). Religious Change and Adolescent Family Dynamics. The Sociological Quarterly 47 (1), 175–194. doi:10.1111/j.1533-8525.2006.00042.x
    29.    ^ e.g. a survey by Robert Putnam showing that membership of religious groups was positively correlated with membership of voluntary organisations
    30.    ^ As is stated in: Doris C. Chu (2007). Religiosity and Desistance From Drug Use. Criminal Justice and Behavior, 2007; 34; 661 originally published online Mar 7, 2007; doi:10.1177/0093854806293485
    31.    ^ For example:
    ◦    Albrecht, S. I., Chadwick, B. A., & Alcorn, D. S. (1977). Religiosity and deviance: Application of an attitude-behavior contingent consistency model. Journal for the Scientific Study of Religion, 16, 263–274.
    ◦    Burkett, S., & White, M. (1974). Hellfire and delinquency: Another look. Journal for the Scientific Study of Religion, 13, 455–462.
    ◦    Chard-Wierschem, D. (1998). In pursuit of the “true” relationship: A longitudinal study of the effects of religiosity on delinquency and substance abuse. Ann Arbor, MI: UMI Dissertation.
    ◦    Cochran, J. K., & Akers, R. L. (1989). Beyond Hellfire: An explanation of the variable effects of religiosity on adolescent marijuana and alcohol use. Journal of Research in Crime and Delinquency, 26, 198–225.
    ◦    Evans, T. D., Cullen, F. T., Burton, V. S., Jr., Dunaway, R. G., Payne, G. L., & Kethineni, S. R. (1996). Religion, social bonds, and delinquency. Deviant Behavior, 17, 43–70.
    ◦    Grasmick, H. G., Bursik, R. J., & Cochran, J. K. (1991). “Render unto Caesar what is Caesar’s”: Religiosity and taxpayer’s inclinations to cheat. The Sociological Quarterly, 32, 251–266.
    ◦    Higgins, P. C., & Albrecht, G. L. (1977). Hellfire and delinquency revisited. Social Forces, 55, 952–958.
    ◦    Johnson, B. R., Larson, D. B., De Li, S., & Jang, S. J. (2000). Escaping from the crime of inner cities: Church attendance and religious salience among disadvantaged youth. Justice Quarterly, 17, 377–391.
    ◦    Johnson, R. E., Marcos, A. C., & Bahr, S. J. (1987). The role of peers in the complex etiology of adolescent drug use. Criminology, 25, 323–340.
    ◦    Powell, K. (1997). Correlates of violent and nonviolent behavior among vulnerable inner-city youths. Family and Community Health, 20, 38–47.
    32.    ^ Baier, C. J., & Wright, B. R. (2001). “If you love me, keep my commandments”: A meta-analysis of the effect of religion on crime. Journal of Research in Crime and Delinquency, 38, 3–21.
    33.    ^ See, for instance, Ronald J. Sider, The Scandal of the Evangelical Conscience: Why Are Christians Living Just Like the Rest of the World? (Grand Rapids, Mich.: Baker, 2005). Sider quotes extensively from polling research by The Barna Group showing that moral behavior of evangelical Christians is unexemplary.
    34.    ^ Blackburn, Simon (2001). Ethics: A Very Short Introduction. Oxford: Oxford University Press. p. 13. ISBN 978-0-19-280442-6.
    35.    ^ Blackburn, Simon (2001). Ethics: A Very Short Introduction. Oxford: Oxford University Press. p. 12. ISBN 978-0-19-280442-6.
    36.    ^ Blackburn, Simon (2001). Ethics: A Very Short Introduction. Oxford: Oxford University Press. pp. 10, 12. ISBN 978-0-19-280442-6.
    37.    ^ Blackburn, Simon (2001). Ethics: A Very Short Introduction. Oxford: Oxford University Press. pp. 11–12. ISBN 978-0-19-280442-6.
    38.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. p. vii. ISBN 978-0-671-20323-8.
    39.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. pp. 20–21. ISBN 978-0-671-20323-8.
    40.    ^ a b Copan, Paul. “Does the Old Testament Endorse Slavery? An Overview”. Retrieved 5 July 2012.
    41.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. p. 68. ISBN 978-0-671-20323-8.
    42.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. pp. 68–69. ISBN 978-0-671-20323-8.
    43.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. p. 166. ISBN 978-0-671-20323-8.
    44.    ^ Russell, Bertrand (1957). Why I Am Not a Christian: And Other Essays on Religion and Related Subjects. New York: George Allen & Unwin Ltd. p. 195. ISBN 978-0-671-20323-8.
    45.    ^ Dalai Lama (10 September 2012). “Dalai Lama”. Facebook. Facebook. Retrieved 10 September 2012.
    46.    ^ a b Dixon, Thomas (2008). Science and Religion: A Very Short Introduction. Oxford: Oxford University Press. p. 115. ISBN 978-0-19-929551-7.
    47.    ^ Ron Rhodes. “Strategies for Dialoguing with Atheists”. Reasoning from the Scriptures Ministries. Retrieved January 4, 2010.
    48.    ^ Williams, Bernard (1972). Morality. Cambridge: Cambridge University Press. pp. 64–65. ISBN 0-521-45729-7.
    49.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies: A First Look”. Journal of Religion and Society (Baltimore, Maryland) 7: 4, 5, 8.
    50.    ^ Paul, Gregory S. (2005). “Cross-National Correlations of Quantifiable Societal Health with Popular Religiosity and Secularism in the Prosperous Democracies: A First Look”. Journal of Religion and Society (Baltimore, Maryland) 7.

Information Technology and Moral Values

First published Tue Jun 12, 2012

Information technology is now ubiquitous in the lives of people across the globe. These technologies take many forms such as personal computers, smart phones, the internet, web and mobile phone applications, digital assistants, and cloud computing. In fact, the list is growing constantly and new forms of these technologies are working their way into every aspect of daily life. In some cases, such as can be seen in massive multiplayer online games (see section 2.1.1 below), these technologies are even opening up new ways of interacting with each other. Information technology at its basic level is technology that records, communicates, synthesizes or organizes information. Information can be understood as any useful data, instructions, or meaningful message content. The word literally means to “give form to” or to shape one’s thoughts. So a basic type of information technology might be the proverbial string tied around your finger to remind you that you have some specific task to accomplish today. Here the string stands in for a more complex proposition such as “buy groceries before you come home.” The string itself is not the information; it merely symbolizes the information, and this symbol must be correctly interpreted for it to be useful. This raises the question: what is information itself?

Unfortunately there is not a completely satisfying and philosophically rigorous definition available, though there are at least two very good starting points. For those troubled by the ontological questions regarding information, we might simply focus on the symbols and define information as any meaningfully ordered set of symbols. This move can be very useful, and mathematicians and engineers prefer to focus on this aspect of information, which is called “syntax,” leaving the meaningfulness of information, or its “semantics,” for others to work out. Claude E. Shannon, working at Bell Labs, produced a landmark mathematical theory of communication (1948), in which he drew on his experiences in cryptography and telephone technologies to work out a mathematical formulation describing how syntactical information can be turned into a signal that is transmitted in such a way as to mitigate noise or other extraneous signals, and can then be decoded by the desired receiver of the message (Shannon 1948; Shannon and Weaver 1949). The concepts described by Shannon, along with additional important innovations made by others too numerous to list, explain the way that information technology works, but we still have the deeper issue to resolve if we want to thoroughly trace the impact of information technologies on moral values.
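Shannon's purely syntactic notion of information can be illustrated with his entropy measure, which quantifies the information carried by a stream of symbols without any reference to what the symbols mean. This is a minimal sketch of that one idea, not of his full channel model:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits (Shannon 1948)."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A highly repetitive message carries less information per symbol than a
# varied one, regardless of what either message *means*.
print(shannon_entropy("aaaaaaab"))   # low: one symbol dominates
print(shannon_entropy("abcdefgh"))   # 3.0 bits: eight equally likely symbols
```

The measure is indifferent to semantics: "abcdefgh" and any other string of eight distinct symbols score identically, which is exactly the sense in which engineers can treat information while leaving meaning "for others to work out."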

The second starting point is a bit more deeply philosophical in nature. Here we begin with the claim that information either constitutes or is closely correlated with what constitutes our existence and the existence of everything around us. This means that information plays an ontological role in the manner in which the universe operates. A standpoint such as this would place information at the center of concern for philosophy and this idea has given rise to the new fields of Information Philosophy and Information Ethics. Philosophy of Information will not be addressed in detail here but the interested reader can begin with Floridi (2010b, 2011b) for an introduction. Some of the most important aspects of Information Ethics will be outlined in more detail below.

Every action we take leaves a trail of information that could be recorded and stored for future use. For instance, you might use the simple technology of keeping a detailed diary listing all the things you did and thought during the day. But today you could augment that with even more detail gathered with advanced information technologies; some examples include all of your economic transactions, a GPS-generated plot of where you traveled, a list of all the web addresses you visited and the details of each search you initiated online, a listing of all your vital signs such as blood pressure and heart rate, all of your dietary intakes for the day, and many more. As you go through this thought experiment you begin to see the complex trail of data that you generate each and every day and how that same data might be collected and stored through the use of information technologies. Here we can begin to see how information technology can impact moral values. As this data gathering becomes more automated and ever-present, we must ask who is in control of this data, what is to be done with it, and who will ensure its accuracy. For instance, which bits of information should be made public, which held private, and which should be allowed to become the property of third parties like corporations? Questions of the production, access and control of information will be at the heart of moral challenges surrounding the use of information technology.
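The thought experiment above can be made concrete with a toy record format. Every event, source name, and value below is invented for illustration; the point is only how naturally separate streams merge once they share a timeline:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TrailEvent:
    timestamp: datetime
    source: str    # e.g. "gps", "bank", "browser", "heart_monitor"
    detail: str

# A few hypothetical events from one person's morning.
trail = [
    TrailEvent(datetime(2012, 6, 12, 8, 30), "gps", "lat=40.71, lon=-74.01"),
    TrailEvent(datetime(2012, 6, 12, 9, 2), "bank", "coffee purchase, $3.50"),
    TrailEvent(datetime(2012, 6, 12, 9, 15), "browser", "searched: 'back pain'"),
    TrailEvent(datetime(2012, 6, 12, 9, 16), "heart_monitor", "bp=135/90"),
]

# Cross-referencing the streams is trivial: where someone was, what they
# bought, what they worried about, and their vital signs, minute by minute.
morning = [e for e in trail if e.timestamp.hour < 10]
by_source = {e.source: e.detail for e in trail}
print(len(morning), sorted(by_source))
```

Nothing in this sketch requires consent or even awareness from the person described, which is precisely the moral worry raised in the text.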

One might argue that this situation is no different from the moral issues revolving around the production, access and control of any basic necessity of life. But there is one major difference: if one party controls access to some natural resource, that by necessity excludes others from using it. This is not necessarily so with digital information, which is non-exclusory, meaning we can all, at least theoretically, possess the same digital information, because copying it from one digital source to another does not require eliminating the previous copy. Since there is no physical obstacle to the spread of all information, there remain only appeals to morality, or economic justice, which might prevent distributing certain forms of information. Therefore, understanding the role of moral values in information technology is indispensable to the design and use of these technologies (Johnson 1985; Moor 1985; Nissenbaum 1998; Spinello 2001). It should be noted that this entry will not directly address the phenomenological approach to the ethics of information technology since there is a detailed entry on this subject available (see entry on Phenomenological Approaches to Ethics and Information Technology).
    •    1. The Moral Challenges of Information Technology
    ◦    1.1 The Fundamental Character of Information Technologies
    ▪    1.1.1 Moral Values in Information Recording
    ▪    1.1.2 Moral Values in Communicating and Accessing Information
    ▪    1.1.3 Moral Values in Organizing and Synthesizing Information
    ◦    1.2 The Moral Paradox of Information Technologies
    •    2. Specific Moral Challenges at the Cultural Level
    ◦    2.1 Social Media and Networking
    ▪    2.1.1 Online Games and Worlds
    ▪    2.1.2 The Lure of the Virtual Game Worlds
    ◦    2.3 Malware, Spyware and Informational Warfare
    ◦    2.4 Future Concerns
    ▪    2.4.1 Acceleration of Change
    ▪    2.4.2 Artificial Intelligence and Artificial Life
    ▪    2.4.3 Robotics and Moral Values
    •    3. Information Technologies of Morality
    ◦    3.1 Information Technology as a Model for Moral Discovery
    ◦    3.2 Information Technology as a Moral System
    ◦    3.4 Informational Organisms as Moral Agents
    •    Bibliography
    •    Academic Tools
    •    Other Internet Resources
    •    Related Entries

1. The Moral Challenges of Information Technology

The move from one set of dominant information technologies to another is always morally contentious. Socrates lived during the long transition from a largely oral tradition to a newer information technology consisting of writing down words and information and collecting those writings into scrolls and books. Famously, Socrates was somewhat antagonistic to writing and never wrote anything down himself. Ironically, we only know about Socrates’ argument against writing because his student Plato ignored his teacher and wrote it down in a dialogue called “Phaedrus” (Plato). Towards the end of this dialogue Socrates discusses with his friend Phaedrus the “…conditions which make it (writing) proper or improper” (section 274b–279c). Socrates tells a fable of an Egyptian god he names Theuth who gives the gift of writing to a king named Thamus. Thamus is not pleased with the gift and replies,

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. (Phaedrus, section 275a)
Socrates, who was adept at quoting lines from poems and epics and placing them into his conversations, fears that those who rely on writing will never be able to truly understand and live by these words. For Socrates there is something immoral or false about writing. Books can provide information but they cannot, by themselves, give you the wisdom you need to use or deeply understand that information. Conversely, in an oral tradition you do not simply consult a library; you are the library, a living manifestation of the information you know by heart. For Socrates, reading a book is nowhere near as insightful as talking with its author. Written words,
…seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you the same thing forever. (Phaedrus, section 275d).

His criticism of writing may at first glance seem humorous, but the temptation to use recall and call it memory is becoming more and more prevalent in modern information technologies. Why learn anything when information is just an Internet search away? In order to avoid Socrates’ worry, information technologies should do more than just provide access to information; they should also help foster wisdom and understanding.

1.1 The Fundamental Character of Information Technologies

Early in the information technology revolution Richard Mason suggested that the coming changes in information technologies would necessitate rethinking the social contract (Mason 1986). What he could not have known then was how often we would have to update the social contract as these technologies rapidly change. Information technologies change quickly and move in and out of fashion at a bewildering pace. This makes it difficult to list them all and catalog the moral impacts of each. The very fact that this change is so rapid and momentous has caused some to argue that we need to deeply question the ethics of the process of developing emerging technologies (Moor 2008). It has also been argued that the ever-morphing nature of information technology is changing our ability to even fully understand moral values as they change. Lorenzo Magnani claims that acquiring knowledge of how that change confounds our ability to reason morally “…has become a duty in our technological world” (Magnani 2007, 93). The legal theorist Larry Lessig warns that the pace of change in information technology is so rapid that it leaves the slow and deliberative process of law and political policy behind, and in effect these technologies become lawless, or extralegal. This is because by the time a law is written to curtail, for instance, some form of copyright infringement facilitated by a particular file-sharing technology, that technology has become outdated and users have moved on to something else that facilitates copyright infringement (Lessig 1999). But even given this rapid pace of change, it remains the case that information technologies or applications can all be categorized into at least three different types, each of which we will look at below.
All information technologies record (store), transmit (communicate), organize and/or synthesize information. For example, a book is a record of information, a telephone is used to communicate information, and the Dewey decimal system organizes information. Many information technologies can accomplish more than one of the above functions and, most notably, the computer can accomplish all of them since it can be described as a universal machine (see the entry on computability and complexity), so it can be programmed to emulate any form of information technology. In section 2 we will look at some specific example technologies and applications from each of the three types of information technology listed above and track the moral challenges that arise out of the use and design of these specific technologies. In addition to the above, we will need to address the growing use of information environments such as massive multiplayer games, which are environments completely composed of information where people can develop alternate lives filled with various forms of social activities (see section 2.1.1). Finally we will look at not only how information technology impacts our moral intuitions but also how it might be changing the very nature of moral reasoning. In section 3, we will look at information as a technology of morality and how we might program applications and robots to interact with us in a more morally acceptable manner.
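The three functions just named, recording, transmitting, and organizing, can be sketched in a few lines, with a single program performing all three, which is the "universal machine" point in miniature. The function names are illustrative, not any standard API:

```python
import json

log = []

def record(item: str) -> None:
    """Record (store): what a diary or database does."""
    log.append(item)

def transmit(items: list) -> str:
    """Transmit (communicate): serialize for sending elsewhere,
    loosely what a telephone line or network does."""
    return json.dumps(items)

def organize(items: list) -> list:
    """Organize/synthesize: impose order, as a card catalog does."""
    return sorted(set(items))

record("b"); record("a"); record("b")
payload = transmit(log)                   # ready to send over a wire
catalog = organize(json.loads(payload))   # received and reorganized
print(catalog)  # ['a', 'b']
```

A book can only do the first of these, a telephone only the second; the programmable computer subsumes all three, which is why its moral impact is so much broader.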

1.1.1 Moral Values in Information Recording

We live in a world rich in data, and the technology to record and store vast amounts of this data has grown rapidly. The primary moral concern here is that when we collect, store, and/or access information it is done in a just manner that anyone can see is fair and in the best interests of all parties involved. As was mentioned above, each of us produces a vast amount of information every day that could be recorded and stored as useful data to be accessed later when needed. But moral conundrums arise when that collection, storage and use of our information is done by third parties without our knowledge or done with only our tacit consent. The control of information is power. The social institutions that have traditionally exercised this power are things like religious organizations, universities, libraries, healthcare officials, government agencies, banks and corporations. These entities have access to stored information that gives them a certain amount of power over their customers and constituencies. Today each citizen has access to more and more of that stored information without the necessity of utilizing the traditional mediators of that information, and therefore a greater individual share of social power (see Lessig 1999).

One of the great values of modern information technology is that it makes the recording of information easy and, in some cases, automatic. Today, a growing number of people enter biometric data such as blood pressure, calorie intake, exercise patterns, etc. into applications designed to help them achieve a healthier lifestyle. This type of data collection could become more automated in the near future. There are already applications that use the GPS tracking available in many phones to track the length and duration of a user’s walk or run. How long until a smartphone collects a running data stream of your blood pressure throughout the day, perhaps tagged with geo-location markers of particularly high or low readings? In one sense this could be immensely powerful data that could lead to much healthier lifestyle choices. But it could also be a serious breach in privacy if the information got into the wrong hands, which would be easily accomplished since third parties have access to information collected on smartphones and online applications. In the next section (1.1.2) we will look at some theories on how best to ethically communicate this recorded information to preserve privacy. But here we must address a more subtle privacy breach: the collection and recording of data about a user without his or her knowledge or consent. When searching on the Internet, browser software records all manner of data about our visits to various websites, which can, for example, make webpages load faster the next time you visit them. Even the websites themselves use various means to record information when your computer has accessed them, and they may leave bits of information on your computer which the site can use the next time you visit. Some websites are able to detect which other sites you have visited or which pages on the website you spend the most time on.
If someone were following you around a library noting down this kind of information you might find it uncomfortable or hostile, but online this kind of behavior takes place behind the scenes and is barely noticed by the casual user.
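The behind-the-scenes record-keeping just described can be sketched as a toy server-side visit log keyed by a cookie-like identifier. The mechanism shown is a simplified stand-in for real cookie handling, and all page names are hypothetical:

```python
import uuid
from collections import defaultdict
from typing import Optional

# What a site might keep server-side, keyed by a token it placed on the
# visitor's machine during the first visit.
visit_log = defaultdict(list)

def handle_request(cookie: Optional[str], page: str) -> str:
    """Return the visitor's identifier, minting one on first contact."""
    if cookie is None:
        cookie = str(uuid.uuid4())   # left on the user's machine
    visit_log[cookie].append(page)   # silently recorded, never announced
    return cookie

c = handle_request(None, "/home")
handle_request(c, "/medical-symptoms")
handle_request(c, "/insurance-quotes")
print(visit_log[c])  # the site now holds a browsing profile of this visitor
```

The casual user sees only pages loading; the profile accumulates invisibly, which is exactly the asymmetry between the online case and the librarian-with-a-notepad case above.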

According to some professionals, information technology has all but eliminated the private sphere. Scott McNealy of Sun Microsystems famously announced in 1999: “You have zero privacy anyway. Get over it” (Sprenger, 1999). Helen Nissenbaum observes that,
[w]here previously, physical barriers and inconvenience might have discouraged all but the most tenacious from ferreting out information, technology makes this available at the click of a button or for a few dollars (Nissenbaum 1997)
and since the time when she wrote this the gathering of data has become more automated and cheaper. Clearly, earlier theories of privacy that assumed the inviolability of physical walls no longer apply but as Nissenbaum argues, personal autonomy and intimacy require us to protect privacy nonetheless (Nissenbaum 1997).

A final concern in this section is that information technologies are now storing user data in “the cloud,” meaning that the data is stored on a device remotely located from the user and not owned or operated by that user, but the data is then available from anywhere the user happens to be, on any device he or she happens to be using. This ease of access also makes the relationship one has to one’s own data more tenuous, because of the uncertainty about the physical location of that data. Since personal data is crucially important to protect, the third parties that offer “cloud” services need to understand the responsibility of the trust the user is placing in them. If you upload all the photographs of your life to a service like Flickr and it were to somehow lose or delete them, this would be a tragic mistake that might not be reparable.

1.1.2 Moral Values in Communicating and Accessing Information

Information technology has forced us to rethink a simple notion of privacy into more complex theories that recognize both the benefits and risks of communicating all manner of information. The primary moral values of concern are privacy, ownership, trust and the veracity of the information being communicated.

Who has the final say whether or not some information about a user is communicated or not? Who is allowed to sell your medical records, your financial records, your friend list, your browser history, etc.? If you do not have control over this process, then how can you claim a right to privacy? For instance, Alan Westin argued in the very early decades of digital information technology that control of access to one’s personal information was the key to maintaining privacy (Westin 1967). It follows that if we care about privacy, then we should give all the control of access to personal information to the individual. Most corporate entities resist this notion, as information about users has become a primary commodity in the digital world, boosting the fortunes of corporations like Google or Facebook. There is a great deal of utility each of us gains from the services of internet search companies. It might actually be a fair exchange that they provide search results for free based on collecting data from individual user behavior that helps them rank the results. This service comes with advertising that is directed at the user based on his or her search history. That is, each user tacitly agrees to give up some privacy whenever they use the service. If we follow the argument raised above that privacy is equivalent to information control, then we do seem to be ceding our privacy away little by little. Herman Tavani and James Moor (2004) argue that in some cases giving the user more control of their information may actually result in greater loss of privacy. Their primary argument is that no one can actually control all of the information about oneself that is produced each day. If we focus only on the little bit we can control, we lose sight of the vast mountains of data we cannot (Tavani and Moor 2004).
Tavani and Moor argue that privacy must be recognized by the third parties that do control your information; only if those parties have a commitment to protecting user privacy will we actually have any real privacy. Towards this end they suggest that we think in terms of restricted access to information rather than strict control of personal information (Tavani and Moor 2004).
Information security is also an important moral value that impacts the communication and access of user information. If we grant the control of our information to third parties in exchange for the services they provide, then these entities must also be responsible for restricting the access to that information by others who might use it to harm us (see Epstein 2007; Magnani 2007; Tavani 2007). With enough information, a person’s entire identity might be stolen and used to facilitate fraud and larceny. The victims of these crimes can have their lives ruined as they try to rebuild such things as their credit rating and bank accounts. This has led to the design of computer systems that are more difficult to access and the growth of a new industry dedicated to securing computer systems.

The difficulty in obtaining complete digital security rests in the fact that security is antithetical to the moral values of sharing and openness that guided many of the early builders of information technology. Steven Levy (1984) describes in his book, “Hackers: Heroes of the Computer Revolution,” a kind of “Hacker ethic,” that includes the idea that computers should be freely accessible and decentralized in order to facilitate “world improvement” and further social justice (Levy 1984; see also Markoff 2005). So it seems that information technology has a strong dissonance created in the competing values of security and openness based on the competing values of the people designing the technologies themselves.

This conflict in values has been debated by philosophers. While many of the hackers interviewed by Levy argue that hacking is not as dangerous as it seems and that it is mostly about gaining knowledge of how systems work, Eugene Spafford counters that no computer break-in is entirely harmless and that the harm precludes the possibility of ethical hacking except in the most extreme cases (Spafford 2007). Kenneth Himma largely agrees that hacking is largely unethical but that politically motivated hacking or “Hacktivism” may have some moral justification though he is hesitant to give his complete endorsement of the practice due to the largely anonymous nature of the speech entailed by the hacktivist protests (Himma 2007b). Mark Manion and Abby Goodrum agree that hacktivism could be a special case of ethical hacking but warn that it should proceed in accordance to the moral norms set by the acts of civil disobedience that marked the twentieth century or risk being classified as online terrorism (Manion and Goodrum 2007).

A very similar value split plays out in other areas as well, particularly in intellectual property rights (see entry on Intellectual Property) and pornography and censorship (see entry on Pornography and Censorship). What information technology adds to these long-standing moral debates is the nearly effortless access to information that others might want to control, such as intellectual property, dangerous information and pornography (Floridi 1999), along with the anonymity of both the user and those providing access to the information (Nissenbaum 1999; Sullins 2010). For example, even though cases of bullying and stalking occur regularly, the anonymous and remote actions of cyber-bullying and cyberstalking make these behaviors much easier and the perpetrator less likely to be caught. Arguably, this makes these unethical behaviors in cyberspace more likely, in that the design of cyberspace itself tacitly promotes unethical behavior (Adams 2002; Grodzinsky and Tavani 2002). Since the very design capabilities of information technology influence the lives of their users, the moral commitments of the designers of these technologies may dictate the course society will take and our commitments to certain moral values (Brey 2010; Bynum 2000; Ess 2009; Johnson 1985; Magnani 2007; Moor 1985; Spinello 2001; Sullins 2010).

Assuming we are justified in granting access to some store of information that we may be in control of, there is a duty to ensure that that information is useful and accurate. If you use a number of different search engines to try to find some bit of information, each of these searches will vary from one another. This shows that not all searches are equal and it matters which search provider you use. All searches are filtered to some degree in order to ensure that the information the search provider believes is most important to the user is listed first. A great deal of trust is placed in this filtering process, and the actual formulas used by search providers are closely held trade secrets. The hope is that these decisions are morally justifiable, but it is difficult to know. If a user is told a link will take them to one location on the web, yet clicking it takes them somewhere else, they may feel that this is a breach of trust. This is often called “clickjacking,” and malicious software can clickjack a browser by taking the user to a site other than the one expected; it will usually be rife with other links that will further infect your machine, or sites that pay the clickjacker for bringing traffic to them (Hansen and Grossman 2008). Again, the anonymity and ease of use that information technology provides can facilitate deceitful practices. Pettit (2009) suggests that this should cause us to reevaluate the role that moral values such as trust and reliance play in a world of information technology.
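The mismatch between a link’s visible text and its actual destination can be checked mechanically. Below is a minimal, hedged sketch of such a heuristic in Python; the hostnames are hypothetical examples, and real clickjacking detection is far more involved than comparing hostnames.

```python
from urllib.parse import urlparse

def looks_deceptive(visible_text, actual_href):
    """Flag a link whose visible text names one host but whose href points
    at a different one -- a crude heuristic for the kind of deceit described
    above. The hostnames used below are hypothetical examples."""
    shown = visible_text if "://" in visible_text else "http://" + visible_text
    shown_host = urlparse(shown).hostname
    real_host = urlparse(actual_href).hostname
    return (shown_host is not None and real_host is not None
            and shown_host != real_host)

# A link whose text and destination disagree is suspicious.
print(looks_deceptive("www.example-bank.com",
                      "http://malicious.example.net/login"))  # True
# A link whose text and destination agree passes the check.
print(looks_deceptive("www.example-bank.com",
                      "http://www.example-bank.com/login"))   # False
```

Of course, a heuristic like this only catches the crudest cases; it says nothing about redirects or scripts that hijack a click after the fact.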

Lastly in this section we must address the impact that access to information has on social justice. Information technology was largely developed in the Western industrial societies during the twentieth century. But even today the benefits of this technology have not spread evenly around the world and to all socioeconomic demographics. Certain societies and social classes have little to no access to the information easily available to those in wealthier and more developed nations, and some of those who have some access have that access heavily censored by their own governments. This situation has come to be called the “digital divide,” and despite efforts to address this gap it may be growing wider. While much of this gap is driven by economics (see Warschauer 2003), Charles Ess notes that there is also a problem with the forces of a new kind of cyber-enabled colonialism and ethnocentrism that can limit the desire of those outside the industrial West to participate in this new “Global Metropolis” (Ess 2009). John Weckert also notes that cultural differences in giving and taking offence play a role in the design of more egalitarian information technologies (Weckert 2007). Others argue that basic moral concerns like privacy are weighed differently in Asian cultures (Hongladarom 2008; Lü 2005).

1.1.3 Moral Values in Organizing and Synthesizing Information

In addition to storing and communicating information, many information technologies automate the organizing of information as well as synthesizing or mechanically authoring or acting on new information. Norbert Wiener first developed a theory of automated information synthesis which he called Cybernetics (Wiener 1961 [1948]). Wiener realized that a machine could be designed to gather information about the world, derive logical conclusions about that information which would imply certain actions, and then implement those actions, all without any direct input from a human agent. Wiener quickly saw that if his vision of cybernetics was realized, there would be tremendous moral concerns raised by such machines, which he outlined in his book The Human Use of Human Beings (Wiener 1950). Wiener argued that, while this sort of technology could have drastic moral impacts, it was still possible to be proactive and guide the technology in ways that would increase the moral reasoning capabilities of both humans and machines (Bynum 2008).
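Wiener’s sense-infer-act cycle can be illustrated with a minimal control loop. The following sketch is purely illustrative (a thermostat-like agent with made-up names and values), not anything Wiener himself specified:

```python
# A minimal sketch of a cybernetic loop: the machine senses the world,
# derives a conclusion that implies an action, and acts -- all without
# direct human input. Names and values here are illustrative assumptions.

def sense(world):
    """Gather information about the world (here, just a temperature)."""
    return world["temperature"]

def decide(temperature, setpoint=20.0):
    """Derive a logical conclusion that implies an action."""
    return "heat_on" if temperature < setpoint else "heat_off"

def act(world, action):
    """Implement the action, changing the world."""
    if action == "heat_on":
        world["temperature"] += 1.0
    return world

def run_loop(world, steps):
    """Repeat the sense -> decide -> act cycle with no human in the loop."""
    for _ in range(steps):
        world = act(world, decide(sense(world)))
    return world

state = run_loop({"temperature": 15.0}, steps=10)
print(state["temperature"])  # settles at the 20.0 setpoint
```

Even this toy loop shows the structure that worried Wiener: once the decision rule is fixed, the machine’s actions follow from its sensed data with no further human judgment.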

Machines make decisions that have moral impacts. Wendell Wallach and Colin Allen tell an anecdote in their book Moral Machines (2008). One of the authors left on a vacation, and when he arrived overseas his credit card stopped working. Perplexed, he called the bank and learned that an automatic anti-theft program had decided that there was a high probability that the charges he was trying to make were from someone stealing his card, and that in order to protect him the machine had denied his credit card transactions. Here we have a situation where a piece of information technology was making decisions about the probability of nefarious activity, resulting in a small amount of harm to the person it was trying to help. Increasingly, machines make important life-changing financial decisions about people without much oversight from human agents. Whether or not you will be given a credit card or a mortgage loan, and the price you will have to pay for insurance, is very often determined by a machine. For instance, if you apply for a credit card, the machine will look at certain data points, like your salary, your credit record, the economic condition of the area you live in, and so on, and then calculate a probability that you will default on your credit card; that probability will either pass a threshold of acceptance or not, determining whether or not you are given the card. The machine can typically learn as well, making better judgments given the results of earlier decisions it has made.
Machine learning and prediction are based on complex logic and mathematics (see, for example, Russell and Norvig 2010). This complexity may result in slightly humorous examples of mistaken prediction like the one told above, or it might interpret the data of someone’s friends and acquaintances, his or her recent purchases, and other social data in a way that results in the mistaken classification of that person as a potential terrorist, thus altering that person’s life in a powerfully negative way (Sullins 2010). It all depends on the design of the learning and prediction algorithm, something that is typically kept secret.
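The kind of threshold decision described above can be sketched as a toy logistic scoring model. The fields, coefficients, and threshold below are invented for illustration and do not reflect any real lender’s algorithm:

```python
# A hypothetical sketch of the threshold decision described above: combine
# applicant data points into a default probability and compare it to an
# acceptance threshold. All weights and fields are made up for illustration.
import math

def default_probability(salary, late_payments, regional_unemployment):
    """Logistic model: higher salary lowers predicted risk; late payments
    and a weak local economy raise it. Coefficients are invented."""
    score = (-0.00005 * salary
             + 0.8 * late_payments
             + 0.1 * regional_unemployment)
    return 1.0 / (1.0 + math.exp(-score))

def approve_card(applicant, threshold=0.5):
    """Approve only if the predicted default probability is below threshold."""
    p = default_probability(applicant["salary"],
                            applicant["late_payments"],
                            applicant["regional_unemployment"])
    return p < threshold

good = {"salary": 60000, "late_payments": 0, "regional_unemployment": 5.0}
risky = {"salary": 60000, "late_payments": 6, "regional_unemployment": 9.0}
print(approve_card(good))   # True
print(approve_card(risky))  # False
```

The moral point the sketch makes concrete: the entire decision hangs on coefficients and a threshold that the applicant never sees, and a mislabeled data point flips the outcome.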

1.2 The Moral Paradox of Information Technologies

Several of the issues raised above result from the moral paradox of information technologies. Many users want information to be quickly accessible and easy to use, and desire that it should come at as low a cost as possible, preferably free. But users also want important and sensitive information to be secure, stable and reliable. Maximizing speed and minimizing cost undermines our ability to provide secure and high-quality information, and the reverse is true as well. Thus the designers of information technologies are constantly faced with making uncomfortable compromises. The early digital-culture pioneer Stewart Brand sums this up well in his famous quote:
In fall 1984, at the first Hackers’ Conference, I said in one discussion session: “On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other” (Clarke 2000—see Other Internet Resources)[1]

Since these competing moral values are essentially impossible to reconcile, they are likely to continue to be at the heart of moral debates in the use and design of information technologies for the foreseeable future.

2. Specific Moral Challenges at the Cultural Level

In the section above, the focus was on the moral impacts of information technologies on the individual user. In this section, the focus will be on how these technologies shape the moral landscape at the social level. At the turn of the century the term “web 2.0” began to surface, referring to the new way that the world wide web was being used as a medium for information sharing and collaboration, as well as a change in the mindset of web designers to include more interoperability and user-centered experiences on their websites. This term has also become associated with “social media” and “social networking.” While the original design of the web by its creator Tim Berners-Lee was always one that included notions of meeting others and collaboration, users were finally ready to fully exploit those capabilities by 2004, when the first Web 2.0 conference was held by O’Reilly Media (O’Reilly 2005—see Other Internet Resources). This change has meant that a growing number of people have begun to spend significant portions of their lives online with other users, experiencing an unprecedented new kind of lifestyle. Social networking is now an important part of many people’s lives, with massive numbers of people congregating on sites like Facebook and interacting with friends old and new, real and virtual. The Internet also offers the immersive experience of interacting with others in virtual worlds, where environments are constructed from information. Just now emerging onto the scene are technologies that will allow us to merge the real and the virtual. This new “augmented reality” is facilitated by the fact that many people now carry GPS-enabled smart phones and other portable computers with them, upon which they can run applications that let them interact with their surroundings and their computers at the same time, perhaps looking at an item through the camera in their device while the “app” calls up information about that entity and displays it in a bubble above the item.
Each of these technologies comes with its own suite of new moral challenges, some of which will be discussed below.

2.1 Social Media and Networking

Social networking is a term given to sites and applications that facilitate online social interactions that typically focus on sharing information with other users referred to as “friends.” The most famous of these sites today is Facebook. There are a number of moral values that these sites call into question. Shannon Vallor (2011) has reflected on how sites like Facebook change or even challenge our notion of friendship. Her analysis is based on the Aristotelian theory of friendship (see entry on Aristotle’s Ethics). Aristotle argued that humans realize a good and true life through virtuous friendships. Vallor notes that four key dimensions of Aristotle’s ‘virtuous friendship,’ namely reciprocity, empathy, self-knowledge and the shared life, are found in online social media in ways that can actually strengthen friendship (Vallor 2011). Yet she argues that social media is not up to the task of facilitating what Aristotle calls ‘the shared life,’ and thus these media cannot fully support the Aristotelian notion of complete and virtuous friendship by themselves (Vallor 2011). Vallor also has a similar analysis of other Aristotelian virtues such as patience, honesty and empathy as they are fostered in online media (Vallor 2010). Johnny Hartz Søraker (2012) argues for a nuanced understanding of online friendship rather than a rush to normative judgment on the virtues of virtual friends.

There are, of course, privacy issues that abound in the use of social media. James Parrish, following Mason (1986), recommends four policies that a user of social media should follow to ensure proper ethical concern for others’ privacy:
1. When sharing information on SNS (social network sites), it is not only necessary to consider the privacy of one’s personal information, but the privacy of the information of others who may be tied to the information being shared.
2. When sharing information on SNS, it is the responsibility of the one desiring to share information to verify the accuracy of the information before sharing it.
3. A user of SNS should not post information about themselves that they feel they may want to retract at some future date. Furthermore, users of SNS should not post information that is the product of the mind of another individual unless they are given consent by that individual. In both cases, once the information is shared, it may be impossible to retract.
4. It is the responsibility of the SNS user to determine the authenticity of a person or program before allowing the person or program access to the shared information. (Parrish 2010)
These systems are not typically designed to protect individual privacy, but since these services are typically free, there is a strong economic drive for the service providers to harvest at least some information about their users’ activities on the site in order to sell that information to advertisers for directed marketing.

2.1.1 Online Games and Worlds

The first moral impact one encounters when contemplating online games is the tendency for these games to portray violence. There are many news stories that claim a cause and effect relationship between violence in computer games and real violence. The claim that violence in video games has a causal connection to actual violence has been strongly critiqued by the social scientist Christopher J. Ferguson (Ferguson 2007). Mark Coeckelbergh argues that this relationship is tenuous at best and that the real issue at hand is the effect these games have on one’s moral character (Coeckelbergh 2007). But Coeckelbergh goes on to claim that computer games can be designed to facilitate virtues like empathetic and cosmopolitan moral development, so he is not arguing against all games, just those in which the violence inhibits moral growth (Coeckelbergh 2007). Marcus Schulzke (2010) holds a different opinion, suggesting that the violence in computer games is morally defensible. Schulzke’s main claim is that actions in a virtual world are very different from actions in the real world: though a player may “kill” another player in a virtual world, that player is instantly back in the game, and the two will almost certainly remain friends in the real world; thus virtual violence is very different from real violence, a distinction gamers are comfortable with (Schulzke 2010). While virtual violence may seem palatable to some, Morgan Luck (2009) seeks a moral theory that might be able to allow the acceptance of virtual murder but that will not extend to other immoral acts such as pedophilia. Christopher Bartel (2011) is less worried about the distinction Luck attempts to draw; Bartel argues that virtual pedophilia is real child pornography, which is already morally reprehensible and illegal across the globe.

While violence is easy to see in online games, there is a much more substantial moral value at play and that is the politics of virtual worlds. Peter Ludlow and Mark Wallace describe the initial moves to online political culture in their book, The Second Life Herald: The Virtual Tabloid that Witnessed the Dawn of the Metaverse (2007). Ludlow and Wallace chronicle how the players in massive online worlds have begun to form groups and guilds that often confound the designers of the game and are at times in conflict with those that make the game. Their contention is that designers rarely realize that they are creating a space where people intend to live large portions of their lives and engage in real economic and social activity, and thus the designers have moral duties somewhat equivalent to those who may write a political constitution (Ludlow and Wallace 2007). According to Purcell (2008), there is little commitment to democracy or egalitarianism in online games, and this needs to change if more and more of us are going to spend time living in these virtual worlds.

2.1.2 The Lure of the Virtual Game Worlds

A persistent concern about the use of computers and especially computer games is that this could result in anti-social behavior and isolation. Yet studies might not support these hypotheses (Gibba et al. 1983). With the advent of massively multiplayer games, as well as video games designed for families, the social isolation hypothesis is even harder to believe. These games do, however, raise gender equality issues. James Ivory used online reviews of games to complete a study showing that male characters outnumber female characters in games, and that the female characters that do appear tend to be overly sexualized (Ivory 2006). Soukup (2007) suggests that gameplay in these virtual worlds is most often oriented to masculine styles of play, thus potentially alienating women players. And those women who do participate in game play at the highest level play roles in gaming culture that are very different from those of the largely heterosexual white male gamers, often leveraging their sexuality to gain acceptance (Taylor et al. 2009). Additionally, Joan M. McMahon and Ronnie Cohen have studied how gender plays a role in the making of ethical decisions in the virtual online world, with women more likely to judge a questionable act as unethical than men (2009). Marcus Johansson suggests that we may be able to mitigate virtual immorality by punishing virtual crimes with virtual penalties in order to foster more ethical virtual communities (Johansson 2009).
The media has raised moral concerns about the way that childhood has been altered by the use of information technology (see for example Jones 2011). Many applications are now designed specifically for toddlers, encouraging them to interact with computers from as early an age as possible. Since children may be susceptible to media manipulation such as advertising, we have to ask whether this practice is morally acceptable or not. Depending on the particular application being used, it may encourage solitary play that may lead to isolation, but others are more engaging, with both the parents and the children playing (Siraj-Blatchford 2010). It should also be noted that pediatricians have advised that there are no known benefits to early media use amongst young children but there are potential risks (Christakis 2009). Studies have shown that from 1998 to 2008, sedentary lifestyles amongst children in England resulted in the first measured decline in strength since World War Two (Cohen et al. 2011). It is not clear whether this decline is directly attributable to information technology use, but it may be a contributing factor.

2.3 Malware, Spyware and Informational Warfare

Malware and computer virus threats are growing at an astonishing rate. Security industry professionals report that while certain types of malware attacks such as spam are falling out of fashion, newer types of attacks focused on mobile computing devices and the hacking of cloud computing infrastructure are on the rise, outstripping any small relief seen in the slowing down of older forms of attack (Cisco Systems 2011; Kaspersky Lab 2011). What is clear is that this type of activity will be with us for the foreseeable future. In addition to the largely criminal activity of malware production, we must also consider the related but more morally ambiguous activities of hacking, hacktivism, commercial spyware, and informational warfare. Each of these topics has its own suite of subtle moral ambiguities. We will now explore some of them here.

While there may be wide agreement that the conscious spreading of malware is of questionable morality, there is an interesting question as to the morality of malware protection and anti-virus software. With the rise in malicious software there has been a corresponding growth in the security industry, which is now a multi-billion dollar market. Even with all the money spent on security software there seems to be no slowdown in virus production; in fact quite the opposite has occurred. This raises an interesting business ethics concern: what value are customers receiving for their money from the security industry? The massive proliferation of malware has been shown to be largely beyond the ability of anti-virus software to completely mitigate. There is an important lag between the time a new piece of malware is detected by the security community and the eventual release of the security patch and malware removal tools.
The anti-virus modus operandi of receiving a sample, analyzing the sample, adding detection for the sample, performing quality assurance, creating an update, and finally sending the update to their users leaves a huge window of opportunity for the adversary … even assuming that anti-virus users update regularly. (Aycock and Sullins 2010)

This lag is constantly exploited by malware producers, and in this model there is an ever-present security hole that is impossible to fill. Thus it is important that security professionals do not overstate their ability to protect systems; by the time a new malicious program is discovered and patched, it has already done significant damage, and there is currently no way to stop this (Aycock and Sullins 2010).
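The detection lag can be illustrated with a toy signature scanner: a sample whose signature has not yet shipped goes undetected, however diligently users update. The signatures and samples below are invented for illustration:

```python
# A minimal sketch of why signature-based detection lags. A scanner can only
# flag byte patterns already in its database, so a sample released before its
# signature ships goes undetected. Signatures and samples here are invented.

KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD_V1"}

def scan(sample: bytes, signatures=KNOWN_SIGNATURES) -> bool:
    """Return True if the sample contains any known signature."""
    return any(sig in sample for sig in signatures)

old_malware = b"header" + b"EVIL_PAYLOAD_V1" + b"trailer"
new_malware = b"header" + b"EVIL_PAYLOAD_V2" + b"trailer"  # not yet analyzed

print(scan(old_malware))  # True: signature already in the database
print(scan(new_malware))  # False: undetected until an update ships

# The signature arrives only after the sample has been collected, analyzed,
# and pushed out -- the window of opportunity described above.
updated = KNOWN_SIGNATURES | {b"EVIL_PAYLOAD_V2"}
print(scan(new_malware, updated))  # True, but only after the lag
```

The window between the second and third calls is exactly the gap the Aycock and Sullins passage describes, compressed into three lines.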
In the past most malware was created by hobbyists and amateurs, but this has changed and now much of this activity is criminal in nature (Cisco Systems 2011; Kaspersky Lab 2011). Aycock and Sullins (2010) argue that relying on a strong defense alone is not enough; the situation requires a counteroffensive reply as well, and they propose an ethically motivated malware research and creation program. This is not an entirely new idea: it was originally suggested by the computer scientist George Ledin in his editorial for the Communications of the ACM, “Not Teaching Viruses and Worms is Harmful” (2005). This idea does run counter to the majority opinion regarding the ethics of learning and deploying malware. Most computer scientists and researchers in information ethics agree that all malware is unethical (Edgar 2003; Himma 2007a; Neumann 2004; Spafford 1992; Spinello 2001). According to Aycock and Sullins, these worries can be mitigated by open research into understanding how malware is created in order to better fight this threat (2010).

When malware and spyware is created by state actors, we enter the world of informational warfare and a new set of moral concerns. Every developed country in the world experiences daily cyber-attacks, with the major target being the United States, which experiences a purported 1.8 billion attacks a month (Lovely 2010). The majority of these attacks seem to be just probing for weaknesses, but they can devastate a country’s internet, as did the cyber attacks on Estonia in 2007 and those on Georgia in 2008. While the Estonian and Georgian attacks were largely designed to obfuscate communication within the target countries, more recently informational warfare has been used to facilitate remote sabotage. The now famous Stuxnet virus used to attack Iranian nuclear centrifuges is perhaps the first example of weaponized software capable of remotely damaging physical facilities (Cisco Systems 2011). The coming decade will likely see many more cyber weapons deployed by state actors along well-known political fault lines such as those between Israel-America-Western Europe vs. Iran, and America-Western Europe vs. China (Kaspersky Lab 2011). The moral challenge here is to determine when these attacks are considered a severe enough challenge to the sovereignty of a nation to justify military reactions, and to react in a justified and ethical manner to them (Arquilla 2010; Denning 2008; Kaspersky Lab 2011).

The primary moral challenge of informational warfare is determining how to use weaponized information technologies in a way that honors our commitments to just and legal warfare. Since warfare is already a morally questionable endeavor, it would be preferable if information technologies could be leveraged to lessen violent combat. For instance, one might argue that the Stuxnet virus did damage that in generations before might have been accomplished by an air raid incurring significant civilian casualties—and that so far there have been no reported human casualties resulting from Stuxnet. The malware known as “Flame” seems to be designed to aid in espionage, and one might argue that more accurate information given to decision makers during wartime should help them make better decisions on the battlefield. On the other hand, these new informational warfare capabilities might allow states to engage in continual low-level conflict, eschewing efforts for peacemaking which might require political compromise.

2.4 Future Concerns

As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared. Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies. James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence the design of these technologies early on before they adversely affect moral change (Moor 2005). Some potential technological concerns now follow.

2.4.1 Acceleration of Change

Information technology has an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year for a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore 1965). History has shown his predictions were rather conservative. This doubling of speed and capabilities, along with a halving of cost, has continued roughly every 18 months since 1965 and shows little evidence of stopping. And this phenomenon is not limited to computer chips but is also present in all information technologies. The potential power of this accelerating change has captured the imagination of the noted inventor Ray Kurzweil, who has famously predicted that if this doubling of capabilities continues, and more and more technologies become information technologies, then there will come a point in time where the change from one generation of information technology to the next becomes so massive that it will change everything about what it means to be human. At this moment, which he calls “the Singularity,” our technology will allow us to become a new posthuman species (2006). If this is correct, there could be no more profound change to our moral values. There has been some support for this thesis from the technology community with institutes such as the Singularity Institute, the Acceleration Studies Foundation, Future of Humanity Institute, and H+.[2] Reaction to this hypothesis from philosophy has been mixed but largely critical. For example, Mary Midgley (1992) argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death.
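The arithmetic behind this accelerating change is simple to sketch. Assuming a doubling every 18 months (the figure cited above), capability grows by a factor of about a billion over 45 years; the baseline unit below is arbitrary:

```python
# Back-of-the-envelope arithmetic for the doubling described above:
# capability doubles every 18 months. The baseline is an arbitrary unit.

def doublings(years, period_months=18):
    """How many doubling periods fit into the given span of years."""
    return (years * 12) // period_months

def capability(baseline, years, period_months=18):
    """Capability after the given span, assuming exponential doubling."""
    return baseline * 2 ** doublings(years, period_months)

# From 1965 to 2010: 45 years is 30 doublings, a factor of roughly a billion.
print(doublings(45))      # 30
print(capability(1, 45))  # 1073741824
```

The same two lines run for 60 years give 40 doublings, a factor of about a trillion, which is why even modest-sounding doubling periods produce the dramatic claims discussed in this section.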
In a similar vein, Sullins (2000) argues that there is a quasi-religious aspect to the acceptance of transhumanism, and that the acceptance of the transhumanist hypothesis influences the values embedded in computer technologies in ways that are dismissive or hostile to the human body. While many ethical systems place a primary moral value on preserving and protecting the natural, transhumanists do not see any value in defining what is natural and what is not, and consider arguments to preserve some perceived natural state of the human body as an unthinking obstacle to progress. Not all philosophers are critical of transhumanism; for example, Nick Bostrom (2008) of the Future of Humanity Institute at Oxford University argues that, putting aside the question of feasibility, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible.
2.4.2 Artificial Intelligence and Artificial Life

Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving. Artificial Life (ALife) is a younger project than AI, focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation. Instead the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern.

Artificial Intelligence

Alan Turing is credited with defining the research project that would come to be known as Artificial Intelligence in his seminal 1950 paper “Computing Machinery and Intelligence.” He described the “imitation game,” in which a computer attempts to convince a human interlocutor that it is not a computer but another human (Turing 1948, 1950). In 1950, he made the now famous claim that
I believe that in about fifty years’ time…. one will be able to speak of machines thinking without expecting to be contradicted.
A description of the test and its implications for philosophy beyond moral values can be found in the entry on the Turing test. Turing’s prediction may have been overly ambitious, and in fact some have argued that we are nowhere near the completion of Turing’s dream. For example, Luciano Floridi (2011a) argues that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in intelligence production it has been a dismal disappointment.
For argument’s sake, assume Turing is correct even if he is off in his estimation of when AI will succeed in creating a machine that can converse with you. Yale professor David Gelernter worries that there would be certain uncomfortable moral issues raised. “You would have no grounds for treating it as a being toward which you have moral duties rather than as a tool to be used as you like” (Gelernter 2007). Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins (2006) counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being.
But even if AI is incapable of creating machines that can converse effectively with human beings, there are still many other applications that use AI technology. Many of the information technologies discussed above, such as search, computer games, data mining, malware filtering, and robotics, utilize AI programming techniques. Thus it may be premature to dismiss progress in the realm of AI.

Artificial Life

Artificial Life (ALife) is an outgrowth of AI and refers to the use of information technology to simulate or synthesize life functions. The problem of defining life has been an interest of philosophy since its founding. See the entry on life for a look at the concept of life and its philosophical ramifications. If scientists and technologists were to succeed in discovering the necessary and sufficient conditions for life and then successfully synthesize it in a machine or through synthetic biology, then we would be treading on territory that has significant moral impact. Mark Bedau has been tracing the philosophical implications of ALife for some time now and argues that there are two distinct forms of ALife, each of which would have different moral effects if and when we succeed in realizing these separate research agendas (Bedau 2004; Bedau and Parke 2009). One form of ALife is completely computational and is in fact the earliest form of ALife studied. It is inspired by the work of the mathematician John von Neumann on self-replicating cellular automata, which von Neumann believed would lead to a computational understanding of biology and the life sciences (1966). The computer scientist Christopher Langton simplified von Neumann’s model greatly and produced simple cellular automata called “Loops” in the early eighties, and helped get the field off the ground by organizing the first few conferences on Artificial Life (1989). Artificial Life programs are quite different from AI programs. Where AI is intent on creating or enhancing intelligence, ALife is content with very simple-minded programs that display life functions rather than intelligence. The primary moral concern here is that these programs are designed to self-reproduce and in that way resemble computer viruses; indeed, successful ALife programs could become vectors for malware. The second form of ALife is much more morally charged.
This form of ALife is based on manipulating actual biological and biochemical processes in such a way as to produce novel life forms not seen in nature.
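The flavor of the computational form of ALife can be conveyed with a toy example. The sketch below is a one-dimensional cellular automaton (Wolfram's elementary rule 110), which is far simpler than von Neumann's self-replicators or Langton's Loops but rests on the same idea: each cell's next state is determined purely by its local neighborhood:

```python
def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton one generation on a ring.

    Each cell's next state depends only on its local neighborhood
    (left neighbor, self, right neighbor) -- the same locality that
    underlies von Neumann-style self-replicating automata.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (me << 1) | right  # encode as 0..7
        out.append((rule >> neighborhood) & 1)          # look up rule bit
    return out

# A single live cell produces a growing, structured pattern.
cells = [0] * 16
cells[8] = 1
for _ in range(5):
    cells = step(cells)
```

Even this tiny program exhibits the self-propagating, pattern-generating behavior that makes computational ALife both scientifically interesting and, when coupled with self-reproduction, a potential malware concern.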

Scientists at the J. Craig Venter Institute were able to synthesize an artificial bacterium called JCVI-syn1.0 in May of 2010. While the media paid attention to this breakthrough, it tended to focus on the potential ethical and social impacts of the creation of artificial bacteria. Craig Venter himself launched a public relations campaign trying to steer the conversation about the issues raised by creating life. This first episode in the synthesis of life gives us a taste of the excitement and controversy that will be generated when more viable and robust artificial protocells are synthesized. The ethical concerns raised by Wet ALife, as this kind of research is called, are more properly the jurisdiction of bioethics (see entry on theory and bioethics). But it does have some concern for us here in that Wet ALife is part of the process of turning theories from the life sciences into information technologies. This will tend to blur the boundaries between bioethics and information ethics. Just as software ALife might lead to dangerous malware, so too might Wet ALife lead to dangerous bacteria or other disease agents. Critics suggest that there are strong moral arguments against pursuing this technology, and that we should apply the precautionary principle, which states that if there is any chance of a technology causing catastrophic harm, and there is no scientific consensus suggesting that the harm will not occur, then those who wish to develop that technology or pursue that research must prove it to be harmless first (see Epstein 1980). Mark Bedau and Mark Traint argue against too strong an adherence to the precautionary principle, suggesting that instead we should opt for moral courage in pursuing such an important step in human understanding of life (2009). They appeal to the Aristotelian notion of courage: not a headlong and foolhardy rush into the unknown, but a resolute and careful step forward into the possibilities offered by this research.

2.4.3 Robotics and Moral Values

Information technologies have not been content to remain confined to virtual worlds and software implementations. These technologies are also interacting directly with us through robotics applications. Robotics is an emerging technology but it has already produced a number of applications that have important moral implications. Technologies such as military robotics, medical robotics, personal robotics and the world of sex robots are just some of the already existent uses of robotics that impact on and express our moral commitments (see Capurro and Nagenborg 2009; Lin et al. 2011).

There have already been a number of valuable contributions to the growing field of robotic ethics (roboethics). For example, in Wallach and Allen’s book Moral Machines: Teaching Robots Right from Wrong (2010), the authors present ideas for the design and programming of machines that can functionally reason on moral questions, as well as examples from the field of robotics where engineers are trying to create machines that can behave in a morally defensible way. The introduction of semi- and fully autonomous machines into public life will not be simple. Towards this end, Wallach (2011) has also contributed to the discussion on the role of philosophy in helping to design public policy on the use and regulation of robotics.

Military robotics has proven to be one of the most ethically charged robotics applications. Today these machines are largely remotely operated (telerobots) or semi-autonomous, but over time these machines are likely to become more and more autonomous due to the necessities of modern warfare (Singer 2009). In the first decade of war in the 21st century robotic weaponry has been involved in numerous killings of both soldiers and noncombatants, and this fact alone is of deep moral concern. Gerhard Dabringer has conducted numerous interviews with ethicists and technologists regarding the implications of automated warfare (Dabringer 2010). Many ethicists are cautious in their acceptance of automated warfare with the provision that the technology is used to enhance just warfare practices (see Lin et al. 2008; Sullins 2009b) but others have been highly skeptical of the prospects of a just autonomous war due to issues like the risk to civilians (Asaro 2008; Sharkey 2011).

3. Information Technologies of Morality

A key development in the realm of information technologies is that they are not only the object of moral deliberations but are also beginning to be used as a tool in moral deliberation itself. Since artificial intelligence technologies and applications are a kind of automated problem solver, and moral deliberations are a kind of problem, it was only a matter of time before automated moral reasoning technologies would emerge. This is still only an emerging technology, but it has a number of very interesting moral implications which will be outlined below. The coming decades are likely to see a number of advances in this area, and ethicists need to pay close attention to these developments as they happen. Susan and Michael Anderson have collected a number of articles regarding this topic in their book, Machine Ethics (2011), and Rocci Luppicini has a section of his anthology devoted to this topic in the Handbook of Research on Technoethics (2009).

3.1 Information Technology as a Model for Moral Discovery
Patrick Grim has been a longtime proponent of the idea that philosophy should utilize information technologies to automate and illustrate philosophical thought experiments (Grim et al. 1998; Grim 2004). Peter Danielson (1998) has also written extensively on this subject beginning with his book Modeling Rationality, Morality, and Evolution with much of the early research in the computational theory of morality centered on using computer models to elucidate the emergence of cooperation between simple software AI or ALife agents (Sullins 2005).

Luciano Floridi and J. W. Sanders argue that information as it is used in the theory of computation can serve as a powerful idea that can help resolve some of the famous moral conundrums in philosophy, such as the nature of evil (1999, 2001). They propose that along with moral evil and natural evil, both concepts familiar to philosophy (see entry on the problem of evil), we add a third concept they call artificial evil (2001). Floridi and Sanders contend that if we do this then we can see that the actions of artificial agents
…to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to ‘receive’ or ‘suffer from’ it. (Floridi and Sanders 2001)
Evil can then be equated with something like information dissolution, where the irretrievable loss of information is bad and the preservation of information is good (Floridi and Sanders 2001). This idea can move us closer to a way of measuring the moral impacts of any given action in an information environment.
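As a toy illustration only, and not Floridi and Sanders's own formalism, one could score actions in an information environment by how many informational objects they irretrievably destroy (the record names below are invented for the example):

```python
def information_harm(before, after):
    """Toy score in the spirit of 'information dissolution': count the
    informational objects irretrievably lost by an action. Zero means
    the information environment was preserved."""
    return len(set(before) - set(after))

# A hypothetical archive of informational objects.
archive = {"census_1901", "census_1911", "parish_register"}

no_loss = information_harm(archive, archive)                 # preservation: 0
dissolution = information_harm(archive, {"census_1911"})     # two objects lost
```

However crude, a measure of this kind shows what it would mean to compare the moral impacts of actions in informational terms.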

3.2 Information Technology as a Moral System

Early in the twentieth century the American philosopher John Dewey (see entry on John Dewey) proposed a theory of inquiry based on the instrumental uses of technology. Dewey had an expansive definition of technology which included not only common tools and machines but also information systems such as logic, laws, and even language (Hickman 1990). Dewey argued that we are in a ‘transactional’ relationship with all of these technologies, within which we discover and construct our world (Hickman 1990). This is a helpful standpoint to take, as it allows us to advance the idea that an information technology of morality and ethics is not impossible. It also allows us to take seriously the idea that the relations and transactions between human agents, and those that exist between humans and their artifacts, have important ontological similarities. While Dewey could only dimly perceive the coming revolutions in information technologies, his theory is useful to us still because he proposed that ethics was not only a theory but a practice, and that solving problems in ethics is like solving problems in algebra (Hickman 1990). If he is right, then an interesting possibility arises, namely the possibility that ethics and morality are computable problems and therefore it should be possible to create an information technology that can embody moral systems of thought.
In 1974 the philosopher Mario Bunge proposed that we take the notion of a ‘technoethics’ seriously, arguing that moral philosophers should emulate the way engineers approach a problem. Engineers do not argue in terms of reasoning by categorical imperatives but instead they use:

… the forms If A produces B, and you value B, choose to do A, and If A produces B and C produces D, and you prefer B to D, choose A rather than C. In short, the rules he comes up with are based on fact and value, I submit that this is the way moral rules ought to be fashioned, namely as rules of conduct deriving from scientific statements and value judgments. In short ethics could be conceived as a branch of technology. (Bunge 1977, 103)
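Bunge's two rule forms can be rendered as a small decision procedure. The sketch below is a hypothetical illustration; the action names, outcomes, and preference values are invented, not Bunge's own formalism:

```python
def choose(options, produces, value):
    """Pick the action whose outcome the agent values most.

    `options`: candidate actions; `produces`: maps each action to the
    outcome it factually brings about; `value`: maps each outcome to a
    numeric preference. This encodes Bunge's rule: 'If A produces B and
    C produces D, and you prefer B to D, choose A rather than C.'
    """
    return max(options, key=lambda action: value[produces[action]])

produces = {"A": "B", "C": "D"}   # factual premises (scientific statements)
value = {"B": 2, "D": 1}          # value judgments: B is preferred to D
best = choose(["A", "C"], produces, value)
```

The point of the rendering is Bunge's own: the rule combines a factual component (what each action produces) with an evaluative one (which outcome is preferred), and the two are kept separate.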

Taking this view seriously implies that the very act of building information technologies is also the act of creating specific moral systems within which human and artificial agents will, at least occasionally, interact through moral transactions. Information technologists may therefore be in the business of creating moral systems whether they know it or not and whether or not they want that responsibility.

3.3 Informational Organisms as Moral Agents

The most comprehensive literature that argues in favor of the prospect of using information technology to create artificial moral agents is that of Luciano Floridi (1999, 2002, 2003, 2010b, 2011b), and Floridi with Jeff W. Sanders (1999, 2001, 2004). Floridi (1999) recognizes that issues raised by the ethical impacts of information technologies strain our traditional moral theories. To relieve this friction he argues that what is needed is a broader philosophy of information (2002). After making this move, Floridi (2003) claims that information is a legitimate environment of its own, and that it has its own intrinsic value that is in some ways similar to the natural environment and in other ways radically foreign; either way, the result is that information is on its own a thing worthy of ethical concern. Floridi (2003) uses these ideas to create a theoretical model of moral action using the logic of object oriented programming.
His model has seven components: 1) the moral agent a; 2) the moral patient p (or more appropriately, reagent); 3) the interactions of these agents; 4) the agent’s frame of information; 5) the factual information available to the agent concerning the situation that agent is attempting to navigate; 6) the environment the interaction is occurring in; and 7) the situation in which the interaction occurs (Floridi 2003, 3). Note that there is no assumption about the ontology of the agents concerned in the moral relationship modeled (Sullins 2009a).
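Since the model explicitly appeals to the logic of object oriented programming, the seven components can be sketched as a class. The field names below are our own rendering of the list above, not code from Floridi:

```python
from dataclasses import dataclass, field

@dataclass
class MoralInteraction:
    """Sketch of Floridi's (2003) seven-component model of moral action."""
    agent: str                                        # 1) the moral agent a
    patient: str                                      # 2) the moral patient p (reagent)
    interactions: list = field(default_factory=list)  # 3) agent-patient interactions
    frame: dict = field(default_factory=dict)         # 4) the agent's frame of information
    facts: dict = field(default_factory=dict)         # 5) factual information about the situation
    environment: str = ""                             # 6) the environment of the interaction
    situation: str = ""                               # 7) the situation in which it occurs

# Nothing constrains `agent` to be human -- the model is ontologically
# neutral, which is the point Sullins (2009a) notes.
m = MoralInteraction(agent="artificial_agent", patient="user")
```

The ontological neutrality is visible in the sketch: an artificial agent slots into the agent role exactly as a human would.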
There is additional literature which critiques and expands the idea of automated moral reasoning (Adam 2008; Anderson and Anderson 2011; Johnson and Powers 2008; Schmidt 2007; Wallach and Allen 2010).
While scholars recognize that we are still some time away from creating information technology that would be unequivocally recognized as an artificial moral agent, there are strong theoretical arguments in favor of the eventual possibility, and therefore such agents are an appropriate concern for those interested in the moral impacts of information technologies.

    •    Adam, A., 2002, “Cyberstalking and Internet pornography: Gender and the gaze,” Ethics and Information Technology, 4(2): 133–142.
    •    –––, 2008, “Ethics for things,” Ethics and Information technology, 10(2–3): 149–154.
    •    Anderson, M. and S. L. Anderson (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press.
    •    Arkin, R., 2009, Governing Lethal Behavior in Autonomous Robots, New York: Chapman and Hall/CRC.
    •    Arquilla, J., 2010, “Conflict, Security and Computer Ethics,” in Floridi 2010a.
    •    Asaro, P. 2008. “How Just Could a Robot War Be?” in Philip Brey, Adam Briggle and Katinka Waelbers (eds.), Current Issues in Computing And Philosophy, Amsterdam, The Netherlands: IOS Press, pp. 50–64.
    •    –––, 2009. “Modeling the Moral User: Designing Ethical Interfaces for Tele-Operation,” IEEE Technology & Society, 28(1): 20–24.
    •    Aycock, J. and J. Sullins, 2010, “Ethical Proactive Threat Research,” Workshop on Ethics in Computer Security Research (LNCS 6054), New York: Springer, pp. 231–239.
    •    Bartell, C., 2011, “Resolving the gamer’s dilemma,” Ethics and Information Technology, 14(1):11–16.
    •    Baase, S., 2008, A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet, Englewood Cliffs, NJ: Prentice Hall.
    •    Bedau, M., 2004, “Artificial Life,” in Floridi 2004.
    •    Bedau, M. and E. Parke (eds.), 2009, The Ethics of Protocells: Moral and Social Implications of Creating Life in the Laboratory, Cambridge: MIT Press.
    •    Bedau, M. and M. Traint, 2009, “Social and Ethical Implications of Creating Artificial Cells,” in Bedau and Parke 2009.
    •    Bostrom, N., 2008, “Why I Want to be a Posthuman When I Grow Up,” in Medical Enhancement and Posthumanity, G. Gordijn and R. Chadwick (eds), Berlin: Springer, pp. 107–137.
    •    Brey, P., 2008, “Virtual Reality and Computer Simulation,” in Himma and Tavanni 2008
    •    –––, 2010, “Values in Technology and Disclosive Computer Ethics,” in Floridi 2010a.
    •    Bunge, M. 1977, “Towards a Technoethics,” The Monist, 60(1): 96–107.
    •    Bynum, T., 2000, “Ethics and the Information Revolution,” Ethics in the Age of Information Technology, pp. 32–55, Linköping, Sweden: Center for Applied Ethics at Linköping University.
    •    –––, 2008, “Norbert Wiener and the Rise of Information Ethics,” in van den Hoven and Weckert 2008.
    •    Capurro, R., Nagenborg, M., 2009, Ethics and Robotics, [CITY]: IOS Press
    •    Christakis, D. A., 2009, “The effects of infant media usage: what do we know and what should we learn?” Acta Pædiatrica, 98 (1): 8–16.
    •    Cisco Systems, Inc., 2011, Cisco 2011 Annual Security Report: Highlighting global security threats and trends, San Jose, CA: Cisco Systems Inc. [available online]
    •    Coeckelbergh, M., 2007, “Violent Computer Games, Empathy, and Cosmopolitanism,” Ethics and Information Technology, 9(3): 219–231
    •    Cohen, D. D., C. Voss, M. J. D. Taylor, A. Delextrat, A. A. Ogunleye, and G. R. H. Sandercock, 2011, “Ten-year secular changes in muscular fitness in English children,” Acta Paediatrica, 100(10): e175–e177.
    •    Danielson, P., 1998, Modeling Rationality, Morality, and Evolution, Oxford: Oxford University Press.
    •    Dabringer, G., (ed.) 2010, Ethica Themen: Ethical and Legal Aspects of Unmanned Systems, Interviews, Vienna, Austria: Austrian Ministry of Defence and Sports. [available online]
    •    Denning, D., 2008, “The Ethics of Cyber Conflict,” In Himma and Tavanni 2008.
    •    Dodig-Crnkovic, G., Hofkirchner, W., 2011, “Floridi’s ‘Open Problems in Philosophy of Information’, Ten Years Later,” Information, (2): 327–359. [available online]
    •    Edgar, S.L., 2003, Morality and Machines, Sudbury Massachusetts: Jones and Bartlett.
    •    Epstein, R., 2007, “The Impact of Computer Security Concerns on Software Development,” in Himma 2007a, pp. 171–202.
    •    Epstein, L.S. 1980. “Decision-making and the temporal resolution of uncertainty”. International Economic Review 21 (2): 269–283.
    •    Ess, C., 2009, Digital Media Ethics, Massachusetts: Polity Press.
    •    Floridi, L., 1999, “Information Ethics: On the Theoretical Foundations of Computer Ethics”, Ethics and Information Technology, 1(1): 37–56.
    •    –––, 2002, “What is the Philosophy of Information?” in Metaphilosophy, 33(1/2): 123–145.
    •    –––, 2003, “On the Intrinsic Value of Information Objects and the Infosphere,” Ethics and Information Technology, 4(4): 287–304.
    •    –––, 2004, The Blackwell Guide to the Philosophy of Computing and Information, Blackwell Publishing.
    •    ––– (ed.), 2010a, The Cambridge Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press.
    •    –––, 2010b, Information: A Very Short Introduction, Oxford: Oxford University Press.
    •    –––, 2011a, “Enveloping the World for AI,” The Philosopher’s Magazine, 54: 20–21
    •    –––, 2011b, The Philosophy of Information, Oxford: Oxford University Press.
    •    Floridi, L. and J. W. Sanders, 1999, “Entropy as Evil in Information Ethics,” Etica & Politica, special issue on Computer Ethics, I(2). [available online]
    •    –––, 2001, “Artificial evil and the foundation of computer ethics,” in Ethics and Information Technology, 3(1): 55–66. [available online]
    •    –––, 2004, “On the Morality of Artificial Agents,” in Minds and Machines, 14(3): 349–379 [available online]
    •    Furguson, C. J., 2007, “The Good The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games,” Psychiatric Quarterly, 78(4): 309–316.
    •    Gelernter, D., 2007, “Artificial Intelligence Is Lost in the Woods,” Technology Review, July/August, pp. 62–70. [available online]
    •    Gibba, G. D., J. R. Baileya, T. T. Lambirtha, and W. Wilsona, 1983, “Personality Differences Between High and Low Electronic Video Game Users,” The Journal of Psychology, 114(2): 159–165.
    •    Grim, P., 2004, “Computational Modeling as a Philosophical Methodology,” In Floridi 2004.
    •    Grim, P., G. Mar, and P. St. Denis, 1998, The Philosophical Computer: Exploratory Essays in Philosophical Computer Modeling, MIT Press.
    •    Grodzinsky, F. S. and H. T. Tavani, 2002, “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22–32.
    •    Hansen, R. and J. Grossman, 2008, “Clickjacking,” SecTheory: Internet Security. [available online]
    •    Hickman, L. A. 1990, John Dewey’s Pragmatic Technology, Bloomington, Indiana: Indiana University Press.
    •    Himma, K. E. (ed.), 2007a, Internet Security, Hacking, Counterhacking, and Society, Sudbury Massachusetts: Jones and Bartlett Publishers.
    •    Himma, K. E., 2007b, “Hacking as Politically Motivated Digital Civil Disobedience: Is Hacktivisim Morally Justified?” In Himma 2007a, pp. 73–98.
    •    Himma, K. E., and H. T. Tavanni (eds.), 2008, The Handbook of Information and Computer Ethics, Wiley-Interscience, 1st edition.
    •    Hongladarom, S., 2008, “Privacy, Contingency, Identity and the Group,” Handbook of Research on Technoethics, Vol. II, R. Luppicini and R. Adell (eds.), Hershey, PA: IGI Global, pp. 496–511.
    •    Ivory, J. D., 2006, “Still a Man’s Game: Gender Representation in Online Reviews of Video Games,” Mass Communication and Society, 9(1): 103–114.
    •    Johansson, M., 2009, “Why unreal punishments in response to unreal crimes might actually be a really good thing,” Ethics and Information Technology, 11(1): 71–79
    •    Johnson, D.G., 1985, Computer Ethics, Englewood Cliffs, New Jersey: Prentice Hall. (2nd ed., 1994; 3rd ed., 2001; 4th ed., 2009).
    •    Johnson D. G., and T. Powers, 2008, “Computers and Surrogate Agents,” In van den Hoven and Weckert 2008.
    •    Jones, T., 2011, “Techno-toddlers: A is for Apple,” The Guardian, Friday November 18. [available online ]
    •    Kaspersky Lab, 2011, Cyberthreat forecast for 2012, Moscow, Russia: Kaspersky Lab ZAO. [available online]
    •    Kurzweil, R., 2006. The Singularity is Near, New York: Penguin Press.
    •    Langton, C. G., (ed.), 1989, Artificial Life: the Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, Redwood City: Addison-Wesley.
    •    Ledin G., 2005, “Not Teaching Viruses and Worms is Harmful” Communications of the ACM , 48(1): 144.
    •    Lessig, L., 1999, Code and Other Values of Cyberspace, New York: Basic Books.
    •    Levy, S., 1984, Hackers: Heroes of the Computer Revolution, New York: Anchor Press.
    •    Lin, P., K. Abney, and G. Bekey, 2011, Robot Ethics: The Ethical and Social Implications of Robotics, Cambridge: MIT Press.
    •    Lin, P., G. Bekey, and K. Abney, 2008, Autonomous Military Robotics: Risk, Ethics, and Design, Washington DC: US Department of the Navy, Office of Naval Research. [available online]
    •    Lovely, E., 2010, “Cyberattacks explode in Congress,” Politico, March 5, 2010. [available online]
    •    Lü, Yao-Hui, 2005, “Privacy and Data Privacy Issues in Contemporary China,” Ethics and Information Technology, 7(1): 7–15
    •    Ludlow, P. and M. Wallace, 2007, The Second Life Herald: The Virtual Tabloid that Witnessed the Dawn of the Metaverse, Cambridge, MA: MIT Press.
    •    Luck, M., 2009, “The gamer’s dilemma: An analysis of the arguments for the moral distinction between virtual murder and virtual paedophilia,” Ethics and Information Technology, 11(1): 31–36.
    •    Luppicini, R. and R. Adell (eds.), 2009, Handbook of Research on Technoethics, Idea Group Inc. (IGI).
    •    Magnani, L., 2007, Morality in a Technological World: Knowledge as Duty, Cambridge, Cambridge University Press.
    •    Mason, R. O., 1986, Four ethical issues of the information age. MIS Quarterly, 10(1): 5–12.
    •    Markoff, J., 2005, What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry, New York: Penguin.
    •    Manion, M. and A. Goodrum, 2007, “Terrorism or Civil Disobedience: Toward a Hacktivist Ethic,” in Himma 2007a, pp. 49–59.
    •    McMahon, J. M. and R. Cohen, 2009, “Lost in cyberspace: ethical decision making in the online environment,” Ethics and Information technology, 11(1): 1–17.
    •    Midgley, M., 1992, Science as Salvation: a modern myth and its meaning, London: Routledge.
    •    Moor, J. H., 1985, “What is Computer Ethics?” Metaphilosophy, 16(4): 266–275.
    •    –––, 2005, “Why We Need Better Ethics for Emerging Technologies,” Ethics and Information Technology, 7(3): 111–119. Reprinted in van den Hoven and Weckert 2008, pp. 26–39.
    •    Moore, Gordon E. 1965. “Cramming more components onto integrated circuits”. Electronics, 38(8): 114–117. [available online]
    •    Neumann, P. G., 2004, “Computer security and human values,” Computer Ethics and Professional Responsibility, Malden, MA: Blackwell
    •    Nissenbaum, H., 1997. “Toward an Approach to Privacy in Public: Challenges of Information Technology,” Ethics and Behavior, 7(3): 207–219. [available online]
    •    –––, 1998. “Values in the Design of Computer Systems,” Computers and Society, March: pp. 38–39. [available online]
    •    –––, 1999, “The Meaning of Anonymity in an Information Age,” The Information Society, 15: 141–144.
    •    –––, 2009, Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford Law Books: Stanford University Press.
    •    Northcutt, S. and C. Madden, 2004, IT Ethics Handbook: Right and Wrong for IT Professionals, Syngress.
    •    Parrish, J., 2010, “PAPA knows best: Principles for the ethical sharing of information on social networking sites,” Ethics and Information Technology, 12(2): 187–193.
    •    Pettit, P., 2009, “Trust, Reliance, and the Internet,” In van den Hoven and Weckert 2008.
    •    Plato, “Phaederus,” in Plato: The Collected Dialogues, E. Hamilton and H. Cairns (eds.), Princeton: Princeton University Press, pp. 475–525.
    •    Powers, T., 2011, “Prospects for a Kantian Machine,” in Anderson and Anderson 2011.
    •    Purcell, M., 2008, “Pernicious virtual communities: Identity, polarisation and the Web 2.0,” Ethics and Information Technology, 10(1): 41–56.
    •    Reynolds, G., 2009, Ethics in Information Technology, (3rd ed.), Course Technology.
    •    Russell, S. and P. Norvig, 2010, Artificial Intelligence: A Modern Approach, (3rd ed.), Massachusetts: Prentice Hall.
    •    Schmidt, C. T. A., 2007, “Children, Robots and… the Parental Role,” Minds and Machines, 17(3): 273–286.
    •    Schulzke, M., 2010, “Defending the Morality of Violent Video Games,” Ethics and Information Technology, 12(2): 127–138.
    •    Shannon, C.E., 1948, “A Mathematical Theory of Communication”, Bell System Technical Journal, 27(July, October): 379–423, 623–656. [available online]
    •    Shannon, C. E. and W. Weaver, 1949, The Mathematical Theory of Communication, University of Illinois Press.
    •    Sharkey, N.E. 2011, “The automation and proliferation of military drones and the protection of civilians,” Journal of Law, Innovation and Technology, 3(2): 229–240.
    •    Singer, P. W., 2009, Wired for War: The Robotics Revolution and Conflict in the 21st Century, Penguin (Non-Classics); Reprint edition.
    •    Siraj-Blatchford, J., 2010, “Analysis: ‘Computers Benefit Children’,” Nursery World, October 6. [available online]
    •    Soukup, C., 2007, “Mastering the Game: Gender and the Entelechial Motivational System of Video Games,” Women’s Studies in Communication, 30(2): 157–178.
    •    Søraker, Johnny Hartz, 2012, “How Shall I Compare Thee? Comparing the Prudential Value of Actual Virtual Friendship,” Ethics and Information technology, DOI: 10.1007/s10676-012-9294-x. [available online]
    •    Spafford, E.H., 1992, “Are computer hacker break-ins ethical?” Journal of Systems and Software 17(1):41–47.
    •    –––, 2007, “Are Computer Hacker Break-ins Ethical?” in Himma 2007a, pp. 49–59.
    •    Spinello, R. A., 2001, Cyberethics, Sudbury, MA: Jones and Bartlett Publishers. (2nd ed., 2003; 3rd ed., 2006; 4th ed., 2010).
    •    –––, 2002, Case Studies in Information Technology Ethics, Prentice Hall. (2nd ed.).
    •    Sprenger P., 1999, “Sun on Privacy: ‘Get Over It’,” Wired, January 26, 1999. [available online]
    •    Sullins, J. P., 2000, “Transcending the meat: immersive technologies and computer mediated bodies,” Journal of Experimental and Theoretical Artificial Intelligence, 12(1): 13–22.
    •    –––, 2005, “Ethics and Artificial life: From Modeling to Moral Agents,” Ethics and Information technology, 7(3): 139–148. [available online]
    •    –––, 2006, “When Is a Robot a Moral Agent?” International Review of Information Ethics, 6(12): 23–30. [available online]
    •    –––, 2009a, “Artificial Moral Agency in Technoethics,” in Luppicini and Adell 2009.
    •    –––, 2009b, “Telerobotic weapons systems and the ethical conduct of war,” APA Newsletter on Philosophy and Computers, P. Boltuc (ed.) 8(2): 21.
    •    –––, 2010, “Rights and Computer Ethics,” in Floridi 2010a.
    •    –––, forthcoming, “Deception and Virtue in Robotic and Cyber Warfare,” Presentation for the Workshop on The Ethics of Informational Warfare, at University of Hertfordshire, UK, July 1–2, 2011.
    •    Tavani, H. T., 2007, “The Conceptual and Moral Landscape of Computer Security,” in Himma 2007a, pp. 29–45.
    •    –––, 2010, Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing, (3rd ed.), Wiley.
    •    Tavani, H. and J. Moor, 2004, “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” in Readings in Cyberethics, second edition, Spinello, R. and Tavani, H. (eds.), Sudbury: Jones and Bartlett.
    •    Taylor, N., J. Jenson, and S. de Castell, 2009, “Cheerleaders/booth babes/Halo hoes: pro-gaming, gender and jobs for the boys,” Digital Creativity, 20(4): 239–252.
    •    Turing, A. M., 1948, “Machine Intelligence”, in B. Jack Copeland (ed.), The Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford University Press.
    •    –––, 1950, “Computing Machinery and Intelligence”, Mind, 59(October): 433–460. [available online]
    •    Vallor, S., 2010, “Social Networking Technology and the Virtues,” Ethics and Information Technology, 12(2): 157–170.
    •    –––, 2011, “Flourishing on Facebook: Virtue Friendship and New Social Media,” Ethics and Information Technology, Netherlands: Springer, pp. 1–15.
    •    Van den Hoven, J. and J. Weckert (eds), 2008, Information Technology and Moral Philosophy, Cambridge: Cambridge University Press.
    •    Von Neumann, J., 1966, Theory of Self Reproducing Automata, edited and completed by A. Burks, Urbana-Champaign: University of Illinois Press.
    •    Wallach, W., 2011, “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies,” Law, Innovation and Technology, 3(2): 185–207.
    •    Wallach, W. and C. Allen, 2010, Moral Machines: Teaching Robots Right from Wrong, Oxford: Oxford University Press.
    •    Warschauer, M., 2003, Technology and Social Inclusion: Rethinking the Digital Divide, Cambridge: MIT Press.
    •    Weckert, John, 2007, “Giving and Taking Offence in a Global Context,” International Journal of Technology and Human Interaction, 3(3): 25–35.
    •    Westin, A., 1967, Privacy and Freedom, New York: Atheneum.
    •    Wiener, N., 1950, The Human Use of Human Beings, Cambridge, MA: The Riverside Press (Houghton Mifflin Co.).
    •    –––, 1961, Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd revised ed., Cambridge: MIT Press. First edition, 1948.
    •    Woodbury, M. C., 2010, Computer and Information Ethics, 2nd edition; 1st edition, 2003, Champaign, IL: Stipes Publishing LLC.
Other Internet Resources
    •    Clarke, R., 2000, “Information wants to be Free…”, unpublished manuscript.
    •    O’Reilly, T., 2005, “What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software”.
Related Entries
Aristotle, General Topics: ethics | artificial intelligence: logic and | computing: and moral responsibility | Dewey, John: political philosophy | ethics, biomedical: theory | evil: problem of | information technology: phenomenological approaches to ethics and | life | pornography: and censorship | property: intellectual | Turing test
Copyright © 2012 by
John Sullins
