feotakahari: (Default)
Thinking about that Libertarian I argued with who said stealing is inherently immoral even if you’re starving, but also said he would steal food if he were starving. When pressed, he eventually said that someone else starving is “a you problem.”
feotakahari: (Default)
I don’t know where people are getting the idea that college ethics classes teach you to be against fascism. If you want antifascism, you need to get it from college political orgs.
feotakahari: (Default)
Shit, it’s fucking virtue ethics. “Morality is about being the kind of person who embodies virtues. If you spank someone, you’re not embodying virtue.” Fuck you and the ancient Greek chariot you rode in on.
feotakahari: (Default)
A fun challenge: examine or imagine someone who has a consistent moral code, but doesn’t have any values on the care/harm axis. Like that guy on Fundies Say the Darndest Things who thinks the only value that matters is obedience to authority. How badly could you fuck them over without doing anything to them that contradicts their moral values?
feotakahari: (Default)
Kirsten Wright is what I like to call a bank account Utilitarian. She “deposits” good done in the world, and “withdraws” bad, and as long as the good outweighs the bad, she isn’t ethically bankrupt. Maybe she could do a little less bad if she cared more, but the bad is still hers to withdraw because the good covers it. (And sometimes she takes out a loan—“this will all be worth it in the end.”)
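
(If it helps to see the metaphor as literal arithmetic, here’s a toy sketch in Python. Every name and number in it is made up for illustration; it’s the accounting scheme I’m describing, not anything Wright herself has written down.)

```python
# Purely illustrative sketch of the "bank account Utilitarian" metaphor.
class MoralLedger:
    def __init__(self):
        self.balance = 0.0  # net good done so far, in arbitrary units

    def deposit(self, good):
        """Record good done in the world."""
        self.balance += good

    def withdraw(self, bad):
        """Record bad done; the account holder treats this as fine
        so long as prior good still covers it."""
        self.balance -= bad

    def take_loan(self, bad_now):
        """'This will all be worth it in the end': go negative now,
        betting that future deposits will repay it."""
        self.balance -= bad_now

    def ethically_bankrupt(self):
        return self.balance < 0

ledger = MoralLedger()
ledger.deposit(100)  # good done
ledger.withdraw(40)  # bad done along the way
print(ledger.ethically_bankrupt())  # False: the good "covers" the bad
```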
feotakahari: (Default)
I don’t talk about my ethics much anymore, but I think what I mean when I talk about ethics is different from what other people mean. I think of it like there’s a machine marked “utils,” and if you press the right buttons on it, it will make some utils. What I want to do is write instructions about how to use the machine. There are wrong buttons I think people should not press, and even buttons I think other people should stop you from pressing, but I don’t think the machine itself grabs your arms and makes you press the right buttons. And if you don’t press a button, the machine doesn’t throw red paint on you so people will know you’re immoral.

People say Utilitarianism is too strict, because it makes you press too many different buttons. But I think what Utilitarianism does is tell you that there are more buttons you could be pressing. And there are a lot of buttons, so realistically, nobody actually presses every single good one. It would be meaningless for the instructions to say “press every good button at all times,” because people still wouldn’t do it.

I’ve been told that I can’t write this kind of ethics, because ethics requires the concept of “you should do such-and-such.” In that case, I’d be happy to say the thing I’m writing isn’t ethics. Call it “fweeb” or whatever, because the name doesn’t matter.
feotakahari: (Default)
Story on Not Always Right about a person who advised a stranger on which over-the-counter allergy meds to take. Lots of people said it was a bad thing to do, because you don’t know whether you’ll recommend something that will hurt them.

What if you don’t recommend anything at all, and they coincidentally buy something that hurts them? Are you more or less responsible?
feotakahari: (Default)
I feel like virtue ethics is a little too flexible. I mean, someone says honesty is a virtue, and you point to a situation where honesty is unkind, and they say kindness is also a virtue. Then you point to a situation where kindness is ineffectual, and they say kindness must be practiced in moderation. It doesn’t seem like there’s any possible situation where you could argue against virtue ethics and have your argument stick.
feotakahari: (Default)
There’s a story I like where a lifeboat is sinking because too many people are on it. One person organizes two others to throw people off until it stabilizes. Then, after the boat is stabilized, the two others find a guy who hid at the bottom of the boat, and they throw him off even though they don’t need to, because they like throwing people off. The one who initially organized this is charged with murder, and he doesn’t contest the charge, but he receives a lesser sentence than the other two.

The organizer’s actions are a trolley problem, but the court’s decision is also a trolley problem. Do you let this man go, because he wanted to prevent at least some deaths? Or do you punish him, in the hopes that fear of unavoidable punishment will prevent people from killing needlessly like the other two did? The court judges him for doing the same thing it does every day, and that’s why it can’t judge him too harshly.
feotakahari: (Default)
I’ve been thinking about thathopeyetlives. I think we’re equally strident about the same things, just in opposite directions.

I’m not going to get into too much detail about Hope, because I think he can do that more accurately himself. But my impression is that he thinks the purpose of humanity is to serve ideals. There’s a repeated pattern where someone tells him his ideals are hurting people, and he’s disgusted that they would think that’s a valid reason to abandon ideals.

I think the purpose of ideals is to serve humanity. On our own, it’s easy for us to help ourselves and our friends, but harder for us to help people we don’t know. There are many ways to help people, but there are recurring patterns to be aware of, and we can codify these patterns as ideals, like the Golden Rule or the cardinal virtues.

There’s a Terry Pratchett quote about how lives are more valuable than causes, because you can find a new cause, but you only get one life. I don’t know if he actually meant for anyone to agree with the character who says it, but I do believe that life should buy life. Take suicide bombing: are your people really going to live longer or better because you’re killing yourself? Or are you just benefiting some old man in a cave who likes having power over you?

It’s not as simple as “never die for anything.” If other people are suffering or dying, there’s value in standing up. But I think it’s about the people themselves, not about something as vague as an ideal.

(Note that there’s a degree of asymmetry here. I support Hope’s desire to follow his own principles, so long as that doesn’t trample on other people. Hope opposes any values other than his own.)
feotakahari: (Default)
“I know probably better than you, given that it’s clear half the people reblogging this have no grounding in environmental ethics (sorry to be mean but jesus christ y’all at least read aldo leopold or SOMETHING).”

There’s a certain amount of salt from ethics majors who’re mad that their discipline isn’t taken as seriously as math or physics, but a discipline in which you can say “read this specific author or collection of authors, and you’ll understand” is flat-out worse than a discipline in which you say “read more of the latest research.” You see the same thing in economics: an order of followers builds up around a few leaders, their research becomes attempts to prove those leaders’ points, and when the leaders are wrong, the followers become wrong-squared. (Contrast evolutionary theory, another topic that produces weird fringes, but one in which researchers are overjoyed to proclaim the ways in which they think Darwin was wrong. I maintain that evolutionary theory is better in this sense than economics.)

Yes, I realize the hypocrisy of saying this on a blog that also proclaims itself “buddies with Bentham.” The most I can say in my defense is that I will never tell you the reason you disagree with Utilitarianism is that you haven’t read enough Bentham. Ultimately, my points are my own, and if you think I’m wrong, maybe it’s because I’m wrong.
feotakahari: (Default)
Reposting from a Tumblr conversation: https://feotakahari.tumblr.com/post/184919985235/it-would-be-amusing-to-have-an-ethical-system-that

Honestly, I think Utilitarianism isn’t marketable enough to be successful. There are a lot of things that people want, ranging from the promise of Heaven to an excuse for xenophobia, that Utilitarianism doesn’t provide. In that sense, it would probably provide more utility for me to promote liberal Christianity, since liberal Christians tend to support a lot of the same actions and political premises I support.

But I don’t think I’m some higher intelligence that’s qualified to judge who’s “smart enough” to be a Utilitarian. If I believe in Utilitarianism, I feel like it’s most appropriate for me to make at least a token effort to tell others about it. Any individual person is capable of surprising me by finding value in what I’m saying, just as I found value in it.
feotakahari: (Default)
Entanglingbriars: "Most ethical systems allow for actions that go beyond the standards of baseline morality and are extra-good; in utilitarianism the standard is absolute and there's no way to exceed it which, when I tried to do utilitarianism, led me to believe that everything I did that did not actively contribute to others' wellbeing was evil. The last is more of a scrupulosity problem on my part, but it is consistent with the standards of a consequentialist ethic."

Systlin: "Odin’s not a force to simply toy with. He will ask you to do things. There will be work. And while I joke, it’s because I’ve earned the right to do so. I’ve walked the path. I’ve done the work he asks, and am still doing it. I’ve journeyed to places higher and lower, and faced fears I didn’t even know I had. I’ve offered sweat and blood and tears, and in return…well.

"In return I’ve been given much. It is worth it, every bit of it and then some.

"But what walking with Odin is not is simply an easy way to make corvid friends. That is a way he watches out for those he favors, not a fun novelty."


A god may ask you for sacrifice in return for blessings. A god may tell you that the person you are now isn't good enough. But I am not a god, and I don't particularly care what you do or don't sacrifice.

The first mistake is to think of utilitarianism as something that tells you whether a person is or isn't good. A person is that which feels happiness, and feeling happiness is good.* It's true that happiness can conflict with other happiness, and people can behave in ways that are selfish or cruel. But they're still people, and they are to be helped to the extent that helping them is possible.

The second mistake is to think that there's a level of good action below which you're "not doing enough." You're existing as a person, and that's a good enough standard for utilitarianism. Now, I won't claim neutrality or pretend I never judge. I'm human enough to have my own likes and dislikes. But that's just me being me, and it has no higher value than me. There's no cosmic scale, above and beyond an individual person, that will tell you your actions are insufficient.

If you choose to spend a year building low-income housing, that is good. If you choose to give five dollars to a homeless person, that is good. If you choose to care for your ailing parent, that is good. Utilitarianism is something you use to determine those things, in those times when good is something you want to do. But it won't tell you what you are, and it won't rank you on a leaderboard. It's a tool, nothing more and nothing less.

*And yes, that does mean a dog is a person.
feotakahari: (Default)
This is heavily inspired by posts by @loving-not-heyting, so I’m crossposting it to Tumblr to see her response.

One of the many reasons I’m a Utilitarian: I fundamentally do not trust any moral code that allows you to make up new rules on the spot.

A common argument against Utilitarianism is that it overcomplicates things. A Utilitarian who says that lying is wrong has to come up with an argument for why lying reduces net utility. Someone else can just say that lying is wrong, because they already know that lying is wrong. Why go to so much effort to get the same answer?

Rather than address this issue, I’d like to bring up a different one: a person who uses an old flag to clean a toilet. (This example comes from Jonathan Haidt’s The Righteous Mind.) My experience with deontologists is that they’ve got a lot of separate, seemingly unrelated moral rules already, and they’re not afraid to add new ones. They may have a rule that disrespecting symbols is wrong, or that lacking in patriotism is wrong, but if they don’t have one, and they’re sufficiently perturbed by this idea, they’ll just make up a rule that using a flag to clean a toilet is wrong. But Utilitarians, who aren’t allowed to make up new rules, have to justify how using a flag to clean a toilet reduces net utility. If they can’t come up with an argument, they have no choice but to say that it’s perfectly okay.

I could say that I’m afraid the ability to create new moral rules will be used to create a rule that being gay is wrong. This is true, but it’s not the whole picture. I’m afraid of a rule that being weird is wrong. You think BDSM is icky? Make up a rule that BDSM is morally wrong! You think someone else’s religion is strange and frightening? Make up a rule that their religion is morally wrong! It’s a bully’s view of the world, where the person who can be targeted should be targeted, and whatever punishment you enact upon them is just putting them in their proper place.

(Also benefiting from this argument: Kantian ethics, some forms of care ethics, and divine command theory, as long as God isn’t coming down to personally tell you that all the things you don’t like are heretical. Also opposed by this argument: virtue ethics, because the ability to say that some action you don’t like is inherently against a particular virtue is the philosophical equivalent of conjuring an anvil directly above your opponent’s head.)

feotakahari: (Default)
Edit: I haven't added entries to this page for a long time, but I think it's worth keeping stickied, just as an introduction.

Any blog should start off by introducing its subject, and in theory, Utilitarianism is quite simple. It’s a theory of ethics under which you should take whatever actions will lead to the greatest happiness for everyone involved. There are no other rules or restrictions: just make everyone involved as happy as you can.
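
(If it helps to see that spelled out literally, here’s a toy sketch in Python. The action names and happiness numbers are invented for illustration; everything else in Utilitarianism is an argument about what actually belongs in those lists.)

```python
# Purely illustrative: pick whichever action yields the greatest
# total happiness across everyone it affects.
def best_action(actions: dict[str, list[float]]) -> str:
    """actions maps an action's name to the happiness change
    it causes for each person involved."""
    return max(actions, key=lambda name: sum(actions[name]))

print(best_action({
    "donate":     [3.0, 5.0, -1.0],  # helps two people, costs me a little
    "do_nothing": [0.0, 0.0, 0.0],
}))  # prints "donate"
```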

So to make this more interesting, let’s go over some of the things Utilitarians DON’T have.
feotakahari: (Default)
When a story is written to promote a way of living, there’s a very specific formula to how it will go. The protagonist encounters problems, adopts the way of living, and the problems go away because their way of living fixes everything. There’s often another character who follows an inferior way of living, trying and failing to solve the same problem, and they’ll either die or change their minds upon seeing the protagonist succeed. There are a few wrinkles in the formula–the protagonist dies saving everyone, the protagonist loses his love interest because his way of life matters more to him than love–but by and large, any story about how to be a good person and live a good life can be summed up in this way.

As a Utilitarian, I naturally have a grudge about how this tends to play out. It’s the tragic villains who tend to espouse Utilitarian values, doing something horrible because every alternative they see is worse. The heroes are the ones who refuse to accept this, then pull off a solution that saves everyone, because the person writing the story made there be a solution that saves everyone. If the Utilitarian was right, and there weren’t any other choices, then by stopping him, the heroes would be responsible for something horrible, and we can’t have that, now can we?

Of course, I can’t claim the rules work any differently for moral relativist heroes. I once read a truly vile book called The Soprano Sorceress where the protagonist murders her way onto the throne, kills thousands of civilians to stay in power, and justifies it all as being for the greater good. Apart from all the bodies she piles up, it basically works out like she planned. You can’t have your hero killing people if there was any better solution to the problem. You can’t, unless you’re Orson Scott Card and you’re writing Ender’s Game.

feotakahari: (Default)
This isn’t gonna be a full Fiction for Utilitarians, because I try to do those for overlooked or forgotten fiction, and Prey is a major new release. I just feel like gushing a little.

The writers of Prey were clearly aware of Utilitarian thought. There are a lot of references to the trolley problem, and the story’s central conflict is deeply Utilitarian. There are several points where you’re given a moral dilemma and asked to sort it out, and one of the available answers is often the Utilitarian one.

feotakahari: (Default)
I once tried to read about a guy called Gilbert Ryle, a philosopher usually labeled a logical behaviorist. The only part I managed to understand was that you’re a logical behaviorist if you’d be willing to undergo surgery with a paralytic drug, a drug that prevents you from forming memories, and no anesthetic. You'd still feel all the pain, but no one would ever be able to prove it, and no future version of yourself or others would show any sign of having seen or experienced it, so by Ryle's standard, it never happened.

I think that’s the principle the American death penalty runs under. No one cares if the dying experience horrible pain. It's just about whether they demonstrate physical signs of pain before they expire, because that might call into question the morality of killing people.
feotakahari: (Default)
The principle of double effect is based on the idea that there is a morally relevant difference between an “intended” consequence of an act and one that is foreseen by the actor but not calculated as a means to his end. So, for example, the principle is invoked to rule morally out of bounds the terror bombing of non-combatants with the goal of victory in a legitimate war, while ruling ethically in bounds an act of strategic bombing that similarly harms non-combatants, with foresight, as a side effect of destroying a legitimate military target.

“It wasn’t like I intended to blow up that orphanage! It just happened to be nearby!”

Seriously, someone introduce these fuckers to the concept of depraved indifference.
