feotakahari: (Default)
 The principle of double effect is based on the idea that there is a morally relevant difference between an “intended” consequence of an act and one that is foreseen by the actor but not calculated to achieve his motive. So, for example, the principle is invoked to hold terror bombing of non-combatants, undertaken to achieve victory in a legitimate war, morally out of bounds, while holding as ethically in bounds an act of strategic bombing that similarly harms non-combatants with foresight, as a side effect of destroying a legitimate military target.

“It wasn’t like I intended to blow up that orphanage! It just happened to be nearby!”

Seriously, someone introduce these fuckers to the concept of depraved indifference.

feotakahari: (Default)
 The problem with doing horrible shit as a deterrent is that if you don’t deter anyone, you’re just doing horrible shit.

Assuming a liberal audience, the easiest way to argue this is to gesture in the direction of the anti-immigration crowd. Jeff Sessions will never run out of horrible shit to do to illegal immigrants to “deter” them, and it will never fucking work, because what else are they gonna do, stay in their home countries and wait for the gangs to murder them? Even when the border patrol destroys water supplies so people will die in the desert, sufficiently desperate people will see it as a lower risk than staying.

But a surprisingly large fraction of liberals think doing horrible shit is a deterrent when you do it in wartime. They say that if you murder enough people, no one will want to fight a war with you. This only proves that liberals need to read more fantasy novels, because recruiters and terrorists have clearly been reading them. “Join us to become a hero and avenge all the people our enemies horribly murdered!” The more horrible shit you do, the stupider the things people will do in response, so long as they can fit it into a revenge narrative.

Honestly, I think there’s a cachet factor. People like the idea of being a strong-minded person who makes tough choices, and that means they need to come up with a tough choice that they can take. But before you talk about what must be done for the greater good, you should consider what actually works.

feotakahari: (Default)
 I’m going to share a secret that protects you from so many Bad Takes: you don’t have to choose just one person in a moral quandary to be “you.”

I originally got this from Judith Thomson’s “famous violinist” argument, which is meant to prove that abortion is okay. She proposes that “you” wake up one day with a famous violinist plugged into your vital organs, and unless you leave him plugged in for nine months, he will die. From there, she argues that you wouldn’t want to have a famous violinist plugged into your vital organs for nine months, and therefore abortion is okay.

What she never considers is that “you” could wake up one day to discover that you’re a famous violinist who’s plugged into someone’s vital organs and will die unless you stay plugged in for nine months. In the magic of Thought Experiments, where anything can happen, it’s all equally valid! And if you don’t want to be unplugged early and die, then you can argue from there that abortion is wrong.

The real issue here is that “what benefits me personally” is not the sole valid definition of morality. But it’s also useful to remember that “me personally” doesn’t have to be a single fixed entity, and it doesn’t have to be whoever the person making the argument decides it ought to be. It turns out the Veil of Ignorance is actually a useful idea, folks! You can argue from the violinist’s perspective, and the perspective of the person plugged into the violinist, and the lawmaker deciding whether it should be legally required to save the violinist, and the surgeon who would have to “unplug” the violinist, and everyone else who’s relevant!

(See also: a good 50% of the arguments defending the movie Passengers, and maybe 20% defending the video game The Last of Us. “If you were in the protagonist’s position, you would do exactly what he did!” But what if I was in some other position?)

feotakahari: (Default)
L.B. Lee linked me to an essay about theories of selfhood and the morality of DID therapy. This is priceless, and not in a good way:

“I can detect little concern within the psychiatric community, or indeed the general public, over the ethical probity of restoration and integration. To the best of my knowledge, no discussion of moral status has even raised the question of whether alters might qualify for a right to continued existence. But proponents of the purely psychological accounts of full moral status ‒ accounts that tend to deny neonates a right to continued existence ‒ would be committed to condemning integration and restoration should the strong model, it seems, be vindicated. On the face of it, this would appear to be an objection to such accounts of full moral status.”

This is what I’m imagining here:

“I can detect little concern within the slaveholding community, or indeed the general public, over the ethical probity of slaveholding. To the best of my knowledge, no discussion of moral status has even raised the question of whether slaves might qualify for a right to freedom. But proponents of the purely psychological accounts of full moral status would be committed to condemning slavery should the model be vindicated. On the face of it, this would appear to be an objection to such accounts of full moral status.”

If you’re gonna reach a moral conclusion, then fucking own it.

feotakahari: (Default)
 I avoid casting judgment on abortion, because it’s not a concept Utilitarianism was built to handle.

Suppose aborting a fetus decreases utility, because that fetus would have grown up to have a happy life. By the same token, aren’t you decreasing utility by not getting pregnant in the first place? This would seem to obligate people to keep having children up until the point where having children decreases total happiness, and that would create a lot of unhappiness for people who aren’t equipped to raise children or simply don’t want to.

Now suppose aborting a fetus increases utility, whether because the child won’t have a happy life, or for any other reason that gets away from the previous problem. Is killing the infant at birth, as some cultures do, any less moral than simply aborting? What if you’ve raised a child for five years, but realize you’ve made a mistake that will decrease total happiness–rather than continuing to raise the child, is it better to immediately kill it?

I once saw immanentizingeschatons and fnord888 trying to resolve this problem. They talked about “population ethics” and “counterpart theory” and lots of other phrases I’d never heard before, but it didn’t look like they were making much headway. Personally, I just stay out of the way. If you judge that an abortion is the right thing to do in your case, then I figure you know more about your life and your values than I do.

feotakahari: (Default)
 Originally written as a response to this post.

This interests me, because I was also bullied in school, and I turned out very untrusting of social contracts in general.

At the school I went to, there was always one student in each class who was the unofficial designated victim. So long as the bullies only targeted this one student, they would never be punished or made to stop, and the victim would often be punished if they tried to speak up. I think my homeroom teacher chose me because of my different cultural background–I wasn’t used to figurative language and commands phrased as questions, and she interpreted my misunderstanding as deliberate defiance and mockery. When I left, my friend switched to being the new victim because she wore cheap clothes, and I’ve heard of other victims who probably had behavioral disorders.

To be clear, there were students who would have at least attempted to be bullies in any school. But in a healthier environment, they would have been made to stop, as opposed to only being reprimanded when they targeted students other than me. And in a healthier environment, students who were nice and friendly wouldn’t have internalized the idea that bullying me was okay. Even if they never showed any inclination towards bullying, they still targeted me sometimes, because they were acting within a social structure where bullying me seemed like a fun and harmless thing to do.

In retrospect, I suppose I could have distinguished between explicit and implicit contracts like OP did. But that was never a thought that crossed my mind! To me, the system that allowed me to be bullied was a social contract, just as if the teachers had created an explicit rule saying “It’s okay to bully Feo.” Instead, I dug into the idea of the students who bullied me despite not normally being bullies, and the ways in which their normal instinct not to be bullies was stifled. I wouldn’t learn the term until years later, but I was trying to build a moral code that would be resistant to the Lucifer effect.

This is how someone who’s fundamentally against sacrificing one person for the good of the many arrived at Utilitarianism, which is so often criticized for allowing the sacrifice of one person for the good of the many! When you take it for granted that any system of specific rules can potentially lead to sacrificing someone, and that people won’t even consider it a bad thing so long as the sacrifice follows the rules, then the only thing you’re left with is a system where sacrificing people is always worse than helping everyone.

In retrospect, I wonder how I would have turned out if I’d concluded that we just needed explicit rules saying that bullying was always wrong. I don’t think that’s a conclusion I would come to now, though. There are people in this world who are very good at interpreting rules to mean whatever they want to mean, and there are other people who listen to their interpretation and think following that interpretation means they must be morally pure. The only thing I can think of to do is to take their rules away.

feotakahari: (Default)
 You may be familiar with Saint Anselm’s argument that God must exist because God is by definition perfect, and things that exist are more perfect than things that don’t. I believe I’ve identified a variant, rarely expressed but often implied as an underlying assumption: God must be important because God is by definition awe-inspiring, and things that are important are more awe-inspiring than things that are not.

To try to explain my own logic here, I believe that the absolute basis of morality is not something that can be objectively proved. If you have a basic idea of what’s moral, you can think logically about how that would be applied in different situations. And if you and I share a basic idea of what’s moral, we can use that to build shared principles. But if your basic idea of what’s moral is fundamentally different from mine, all I can say is that your idea horrifies me and I wouldn’t want to live in a society based around it. I can’t “prove you wrong” in a logical sense.

This is not a belief I share with people who believe in divine command morality, nor is it an idea they typically understand when I try to explain it. As far as they’re concerned, what’s moral is what God says because God is the one who said it. When I try to get them to explain where they’re coming from, they fall back on an argument from authority: God is like a father or a king, so what God says is more important than what you say, just like what your father or your king says is more important than what you say.

Besides invoking my intense dislike of fathers and kings, this argument sidesteps the issue of why this “God” person is important in the first place. I don’t inherently or objectively place moral value on anyone, whether they’re a god or not. I subjectively value them based on my own inclinations. Saying “this necessarily must be important” is no more logical or sensical than “this necessarily must exist,” and it has no value in a serious discussion.

(I have seen another, more logical variant of this: “God is smarter than you and wants you to be happy, so if you want to be happy, you should listen to what God says.” My only response to this is that if God wants me to be happy, he’s not doing a very good job of it.)

feotakahari: (Default)
 The Righteous Mind by Jonathan Haidt is a book I both suspect and fear is right. Suspect because it matches my experience, and fear because it means this entire blog is a waste of my time.

Haidt’s argument, which he supports through studies, is that human beings do not create logical arguments before they decide whether something is moral or immoral. Each person has various “tastebuds” that react to the nature of a specific action, and these determine whether your emotional reaction is positive or negative. If it “tastes bad” to you, you’ll try to create a logical argument for why it’s wrong, but you’ll keep insisting it’s wrong even if all your arguments are proven false.

As an example, suppose you’re told the story of someone who used an old flag to clean a toilet. If your “tastebuds” say that disrespecting tradition is bad, you’ll immediately decide that this is wrong, regardless of any logic involved. If you don’t have a “tastebud” for disrespecting tradition, you won’t see a problem at all. To the “tasteful” person, the “tasteless” seems blind, while to the “tasteless” person, the “tasteful” person is reacting to nothing.

Haidt outlines six scales, but observes that different people put different values on different scales. According to his research, people who are very liberal often value only one scale, care vs. harm. (In other words, Utilitarianism.) People who are very conservative often value all six, and they rank care vs. harm lower than other scales like authority vs. subversion. Hence why conservatives have so many values that make no sense to liberals, and why they’re so baffled that liberals don’t share those values.

From there, Haidt does a stupid, stupid thing. He jumps from describing what is true (different values exist) to saying what he thinks should be true (all values should be equally treasured.) He says conservatives are the ones with the right morality, and liberals have it all wrong because their morality is incomplete. David Hume rolls over in his grave as yet another philosopher flings himself headlong into the is-ought gap.

Myself, I go the opposite route. Utilitarianism places its value on a scale that almost everyone agrees is important. It creates a common ground for what people say “ought” to be true, which they can build off of without having to jump across what “is” true. If you and I both think that it ought to be true that people are made happy, and I say that marriage rights make gay people happy, you can at least understand where I’m coming from. But if you say that it ought to be true that people respect religion, and your religion says it’s wrong for gay people to marry–well, I can’t help but find that statement to be in questionable taste.

feotakahari: (Default)
 There’s a book called The Lathe of Heaven that’s about how Utilitarianism is bad and wrong. (More directly, it’s about how trying to overwrite reality with dreams will turn everyone’s skin gray and make aliens invade the moon, but the general message is that Utilitarianism is bad and wrong.) The Standard Utilitarian Villain has a monologue about how if you found someone dying of snakebite on the side of the road, and you had antivenom, you would save them. The protagonist responds that the person you saved might go on to kill four people, and you have no way of knowing whether you did the right thing or not.

Besides making me hope Ursula K. Le Guin never finds me dying of snakebite, this argument disappoints me in its neglect of probability. It’s possible from your perspective that the dying person is a murderer, and it’s also possible that they’re not. But if we assume that less than 25% of randomly chosen people will kill four other people, then saving the dying person is a net gain.
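The arithmetic behind that 25% figure can be made explicit. A minimal sketch, under the simplifying assumption that each life saved or lost counts as one unit of utility (the function name and the one-unit-per-life weighting are illustrative assumptions, not anything from Le Guin or from formal decision theory):

```python
# Expected utility of saving a stranger dying of snakebite, assuming
# (hypothetically) that each life counts as exactly one unit of utility.
def expected_utility_of_saving(p_murderer: float, victims: int = 4) -> float:
    """One unit gained for the life saved, minus the expected loss
    if the saved person turns out to kill `victims` people."""
    return 1.0 - p_murderer * victims

# Saving is a net gain whenever p_murderer < 1/victims -- here, below 25%.
assert expected_utility_of_saving(0.20) > 0  # net expected gain
assert expected_utility_of_saving(0.30) < 0  # net expected loss
```

The break-even point is exactly where the probability of the worst case, times its cost, equals the certain gain of saving a life.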

I believe all ethics relates back to probability, although most ethical systems do their best to hide their work. A rule saying “don’t lie” comes from a background in which lying repeatedly had negative results, and the probabilities worked out in favor of telling the truth. On the other hand, a community that repeatedly had negative interactions with the outside world might create the rule “don’t trust outsiders,” with lying and misleading outsiders as the tactic supported by probability. But these rigid rules lack the flexibility of an approach that directly considers probability in the moment. The more room you have to consider everything that relates to the case at hand, the better your chances at making the right choice. (Is the dying person carrying a bloody axe? But what if it’s blood from the snake–is there a dead snake nearby? And do you know anything about why they were on this road in the first place?)

Utilitarianism is not a moral system for making the right choice, in that it’s a system that acknowledges that you don’t know the right choice. It’s a system for making choices that are moderately more likely to be right, and will hopefully add up over time. As a Utilitarian, there will be times when you screw up and make the world worse! All you can do is make a judgment and take a gamble.

feotakahari: (Default)
 If it seems like I bash on property rights a lot, it’s because I’ve never been able to properly fit them into Utilitarianism. I’m against reducing utility, but property rights are often about when and whether utility should be moved around.

To be fair, stealing inherently reduces social order and security. You can’t feel comfortable saving money if you expect that this money may someday be stolen. It’s also true that people who steal from “the wealthy” are often stealing from people who are also economically struggling (e.g. poor people in Nigeria who think they’re morally justified in ripping off poor people in America, because Americans must be capable of recovering from financial loss, right?) I also have no particular grudge against wealthy people, and there are people who put their wealth to good use helping others.

On the other hand, I can’t reasonably argue that a dollar for a man who has a million dollars is worth as much as a dollar for a man who only has one dollar. If you can’t afford to pay your rent, having just a little more money means a big increase in your happiness. If you’re in a stable position and have satisfied your basic needs, there are only so many things you can do with a little more money in order to become happier. In that sense, free trade commonly produces situations in which resources are not optimally distributed for maximum happiness.
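This intuition has a standard formalization in economics: if utility is taken to be logarithmic in wealth, the same dollar buys far more happiness at the bottom than at the top. A sketch, with log utility as an illustrative assumption rather than a measured fact about people:

```python
import math

# Marginal utility of one extra dollar, under the (hypothetical)
# assumption that utility of wealth is logarithmic: u(w) = ln(w).
def marginal_utility(wealth: float) -> float:
    """Utility gained when wealth rises from `wealth` to `wealth + 1`."""
    return math.log(wealth + 1) - math.log(wealth)

# A dollar means vastly more to someone with one dollar
# than to someone with a million of them.
gain_poor = marginal_utility(1)          # ln(2), roughly 0.69
gain_rich = marginal_utility(1_000_000)  # roughly one millionth of that scale
assert gain_poor > 100_000 * gain_rich
```

Under this assumption, moving a dollar from the millionaire to the man with one dollar is a large net gain in total utility, which is the tension with property rights the paragraph above describes.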

This doesn’t mean I’m encouraging you to go out and try to be Robin Hood. It’s admirable to be Galileo, but a lot of the people I’ve seen invoke Galileo have been misusing and abusing him. And while I admire Martin Luther King Jr.’s efforts to be a “gadfly” and create situations where people were forced to acknowledge and think about racism, a lot of the people on Tumblr who try to imitate his style go too far. Yet still, if you want to go rip off Martin Shkreli, I can’t really argue that you’re doing the wrong thing. Just create more utility with the money than he is.

Side note 1: This is where I really struggle with Internet piracy. Assume you have finite spending money for either a restaurant or a video game. I can argue that pirating the video game is immoral because the game developers might go out of business. But I can also argue that buying the game is immoral, because not spending money at the restaurant may make the restaurant go out of business. Maybe Utilitarianism just isn’t meant to handle artificial scarcity.

Side note 2: In theory, a progressive income tax redistributes money from the rich to programs that can help the poor. Since the tax is expected, it doesn’t reduce social order. Good luck trying to close all the loopholes in the current American tax code, though.

feotakahari: (Default)
 The Case of the Speluncean Explorers is a brilliant exploration of what the law should do and what purpose it serves, and I urge you to go read it right now. 

Finished? Good.

A lot of the things I could say about this story have already been said by wiser minds than me, but I want to pick at a specific argument by Justice Foster. His statements are convoluted and don’t lend themselves to direct quotation, but the gist is that you can’t punish people for breaking the law when the law was incapable of protecting them. In a normal situation, there are all manner of social institutions meant to preserve life and prevent situations where killing is necessary. None of those were available in an isolated cave, and killing turned out to be the only way any of the explorers could get out alive. In Foster’s terms, the “social compact” failed them, so they had to make up their own rules to survive long enough to return to society.

This may make intuitive sense at first glance, and it’s part of the basis behind self-defense laws. When someone’s coming at you with a knife, you may not have the time or the opportunity to call the police and resolve the situation without violence. The explorers had time to think, but starvation would have come for them long before the forces of law and order could, and their killing of Whetmore could be considered a necessary defense of their lives.

Now consider a different case. In Full Tilt by Neal Shusterman, the narrator relates the story of a group of people on a sinking boat. They don’t have enough life jackets for everyone, so a child gets to grab one first. An adult steals the child’s life jacket and lets her drown. He’s convicted of depraved-heart murder, and the narrator views this as a just consequence of his actions.

There are obvious differences between the two cases. As a Utilitarian, I have my own view on which differences matter, just as you may have yours. I simply wish to argue that Foster’s view is incomplete. If you think that the man on the boat deserved to be convicted, then there must be some moral principle that applies even when the law can’t save you and there’s no one you can rely on.

feotakahari: (Default)
 The Ones Who Walk Away From Omelas is a classic among heavy-handed allegories. Guardians of the Flame is a cult classic at best, most notable for being the codifier of the “trapped in a Dungeons and Dragons campaign” cliche. But a single scene in Guardians answers questions Omelas never bothered to ask, and I believe it holds the key to why the society in Omelas is wrong.

Omelas needs little in the way of a plot summary, inasmuch as it has little in the way of a plot. It proposes a society in which every person save one is happy and content, but that one lives in pain and agony. Without that one’s suffering, it would not be possible for the others to live so happily. Most people accept this, but a few choose to leave the city rather than live under such a contract. 

A scene in Guardians finds the protagonist in the sewage pit of a wealthy and opulent city. Living so finely produces a tremendous amount of waste, enough to overflow the pit were it not somehow disposed of. In place of laborers, a single baby dragon is chained in the pit, forced to burn away the waste day and night so that it does not drown in sewage. The people of the city seldom go near the dragon, nor do they listen to what he has to say. But the protagonist breaks the dragon’s chains, letting him free and allowing the city streets to fill with waste.

Omelas is light and airy, a thought experiment without context. There’s no explanation of why one person needs to suffer, because an explanation isn’t the point. Guardians, by contrast, is grounded in knowledge of exactly what’s happening and exactly why it’s tolerated. This means that Guardians can make a point Omelas isn’t equipped to discuss: creating utility is not the same as moving utility around.

Maybe you would be unhappy if you had to pick fruit in the hot sun. But if someone else picks the fruit for you to eat, their utility loss doesn’t become zero just because it isn’t your utility loss. It’s not necessarily a gain in utility if someone other than you washes your clothes, or if someone other than you cooks the food that you eat. You’re not assembling your own iPhone or mining the diamond for your wedding ring, but you still have to remember that there are actual human beings doing those tasks, and that it matters whether those people are exploited.

There’s no solution for Omelas, but the city in Guardians is fixable, and that starts with letting the streets flood. When the citizens are reminded that waste exists, they can discuss why there’s so much waste and how to have less of it. There will always be some waste that needs to be burned, but if citizens of the community are burning it, there can be an actual discussion about how much waste should be made and what level of consumption will lead to the most total happiness. It’s not as tidy and out-of-sight as a chained dragon, but it means no one has to suffer as much as the dragon had to suffer–and maybe, just possibly, our own lowest-paid laborers don’t need to be as poor and ill-cared-for as they currently are.

(Side note: this is one of the few points where I see eye to eye with Jessa Crispin. She makes some great points about how “self-care feminism” often means paying poor people a pittance to care for you, then pretending you don’t have or need support from anyone else.)

feotakahari: (Default)
 “Your baby is tied to a timebomb. You have the terrorist. He tells you you have 1 hour. Do you #torture him to find your baby or let it die?”–Lee Hurst

This is known as the ticking time bomb scenario. The most common formulation is that the bomb will destroy a city, which is less personal and easier to evaluate. If Hurst has no objections, let’s work with that instead. (If Hurst does have objections, let’s work with that anyway.)

Assuming I’m certain of success and there’s no other solution, I would laud torturing a terrorist to prevent a bomb from destroying a city. Lee Hurst can have that one.

Assuming I’m certain of success and there’s no other solution, I would laud torturing a terrorist’s young child to prevent a bomb from destroying a city. Let’s give Hurst that one, too.

Assuming I’m certain of success and there’s no other solution, I would laud killing half a city to prevent a bomb from destroying the whole city. Let’s throw that in Hurst’s face.

This is what you get when you draw your moral dilemmas too narrowly. In the world we actually live and act in, you’re not certain you can torture anything out of this person in the span of one hour. In the real world, you also don’t know that there’s no other way you or anyone else could possibly find the bomb. And going by past history, there’s a very high chance that real-world you doesn’t even know if the person you’re torturing has any information about where the bomb is located! Real-world you doesn’t know if, once they tell you where the bomb is, they’re lying to bait you into wasting time and resources. Real-world you doesn’t know if they’re making up the first location that comes to mind so you’ll stop torturing them for at least a little while. Odds are that when real-world you is arguing about whether the folks over at Guantanamo should be allowed to torture people, real-world you has never been present at a torture session and never seen what torturers actually do.

So if you’re going to argue for torture, don’t pretend this is an episode of 24. Talk about what’s actually happening in the world we live in, because that’s where prisoners in many countries are currently being tortured while you argue your hypotheticals.

feotakahari: (Default)
 A problem in philosophy textbooks: A small number of people fight in an arena for the entertainment of the masses. Some of these fighters are injured or die. Is it moral to end the fights and deprive the masses of their entertainment?

A problem we’re currently facing: Football players give themselves brain damage for the entertainment of the masses. What methods, ranging from improved safety techniques to bans on particular tactics, should be used to minimize injury?

A problem in philosophy textbooks: Several people have been murdered, and a mob is seeking vengeance. People will die if they’re not satisfied. Is it moral to accuse some random stranger of the crime, letting him be killed so others won’t be?

A problem we’re currently facing: People are murdered by terrorists, and the public demands protection. This may involve anything from bombing the living hell out of random uninvolved brown people, to outreach and awareness in communities where terrorists might be recruited, to bribing known terrorists so they’ll spill info on their colleagues. What actions should be taken to ensure that citizens both feel safe and are safe?

A problem in philosophy textbooks: a trolley is out of control and will kill six people. You can pull a lever to redirect it, killing one person. Do you pull the lever?

A problem we’re currently facing: Hospitals need funds to treat their patients. Police need funds to try to reduce the crime rate. Funds are needed for pollution cleanup, for fire prevention, for schools, for supporting the unemployed, for everything you can imagine. How should we distribute the money? For that matter, what moral values should we use to determine the best way of distributing the money?

This is one of the reasons I don’t put much stock in ethics thought experiments. The typical approach is to create an extreme situation, without any room for alternate approaches, and then say the code of ethics that’s “right” is the one that gives the most normal, everyday answer. But those aren’t the questions that matter. Regardless of what values you follow, the questions you need to address are the ones you actually have to deal with.

Page generated Jul. 17th, 2025 12:49 am
Powered by Dreamwidth Studios