(no subject)
Jun. 1st, 2024 07:39 am
Arknights and ethics
Oct. 3rd, 2023 02:35 pm
Looking back on my old posts
Jul. 30th, 2023 05:35 pm
People say Utilitarianism is too strict, because it makes you press too many different buttons. But I think what Utilitarianism does is tell you that there are more buttons you could be pressing. And there are a lot of buttons, so realistically, nobody actually presses every single good one. It would be meaningless for the instructions to say “press every good button at all times,” because people still wouldn’t do it.
I’ve been told that I can’t write this kind of ethics, because ethics requires the concept of “you should do such-and-such.” In that case, I’d be happy to say the thing I’m writing isn’t ethics. Call it “fweeb” or whatever, because the name doesn’t matter.
(no subject)
Jul. 30th, 2023 05:08 pm
What if you don’t recommend anything at all, and they coincidentally buy something that hurts them? Are you more or less responsible?
(no subject)
Feb. 1st, 2023 04:06 pm
The organizer’s actions are a trolley problem, but the court’s decision is also a trolley problem. Do you let this man go, because he wanted to prevent at least some deaths? Or do you punish him, in the hopes that fear of unavoidable punishment will prevent people from killing needlessly like the other two did? The court judges him for doing the same thing it does every day, and that’s why it can’t judge him too harshly.
(no subject)
Dec. 11th, 2019 11:53 pm
I’m not going to get into too much detail about Hope, because I think he can do that more accurately himself. But my impression is that he thinks the purpose of humanity is to serve ideals. There’s a repeated pattern where someone tells him his ideals are hurting people, and he’s disgusted that they would think that’s a valid reason to abandon ideals.
I think the purpose of ideals is to serve humanity. On our own, it’s easy for us to help ourselves and our friends, but harder for us to help people we don’t know. There are many ways to help people, but there are recurring patterns to be aware of, and we can codify these patterns as ideals, like the Golden Rule or the cardinal virtues.
There’s a Terry Pratchett quote about how lives are more valuable than causes, because you can find a new cause, but you only get one life. I don’t know if he actually meant anyone to agree with the character who says it, but I do believe that life should buy life. Take suicide bombing—are your people really going to live longer or better because you’re killing yourself? Or are you just benefiting some old man in a cave who likes having power over you?
It’s not as simple as “never die for anything.” If other people are suffering or dying, there’s value in standing up. But I think it’s about the people themselves, not about something as vague as an ideal.
(Note that there’s a degree of asymmetry here. I support Hope’s desire to follow his own principles, so long as that doesn’t trample on other people. Hope opposes any values other than his own.)
There’s a certain amount of salt from ethics majors who’re mad that their discipline isn’t taken as seriously as math or physics, but a discipline in which you can say “read this specific author or collection of authors, and you’ll understand” is flat-out worse than one in which you say “read more of the latest research.” You see the same thing in economics: an order of followers builds up around a few leaders, their research becomes an attempt to prove those leaders’ points, and the leaders are wrong while the followers become wrong-squared. (Contrast evolutionary theory, another topic that produces weird fringes, but one in which researchers are overjoyed to proclaim the ways in which they think Darwin was wrong. I maintain that evolutionary theory is better in this sense than economics is.)
Yes, I realize the hypocrisy of saying this on a blog that also proclaims itself “buddies with Bentham.” The most I can say in my defense is that I will never tell you the reason you disagree with Utilitarianism is that you haven’t read enough Bentham. Ultimately, my points are my own, and if you think I’m wrong, maybe it’s because I’m wrong.
(no subject)
May. 16th, 2019 09:27 am
Honestly, I think Utilitarianism isn’t marketable enough to be successful. There are a lot of things that people want, ranging from the promise of Heaven to an excuse for xenophobia, that Utilitarianism doesn’t provide. In that sense, it would probably provide more utility for me to promote liberal Christianity, since liberal Christians tend to support a lot of the same actions and political premises I support.
But I don’t think I’m some higher intelligence that’s qualified to judge who’s “smart enough” to be a Utilitarian. If I believe in Utilitarianism, I feel like it’s most appropriate for me to make at least a token effort to tell others about it. Any individual person is capable of surprising me by finding value in what I’m saying, just as I found value in it.
Utilitarianism Is Not a God
Mar. 23rd, 2019 03:04 am
Systlin: "Odin’s not a force to simply toy with. He will ask you to do things. There will be work. And while I joke, it’s because I’ve earned the right to do so. I’ve walked the path. I’ve done the work he asks, and am still doing it. I’ve journeyed to places higher and lower, and faced fears I didn’t even know I had. I’ve offered sweat and blood and tears, and in return…well.
"In return I’ve been given much. It is worth it, every bit of it and then some.
"But what walking with Odin is not is simply an easy way to make corvid friends. That is a way he watches out for those he favors, not a fun novelty."
A god may ask you for sacrifice in return for blessings. A god may tell you that the person you are now isn't good enough. But I am not a god, and I don't particularly care what you do or don't sacrifice.
The first mistake is to think of utilitarianism as something that tells you whether a person is or isn't good. A person is that which feels happiness, and feeling happiness is good.* It's true that happiness can conflict with other happiness, and people can behave in ways that are selfish or cruel. But they're still people, and they are to be helped to the extent that helping them is possible.
The second mistake is to think that there's a level of good action below which you're "not doing enough." You're existing as a person, and that's a good enough standard for utilitarianism. Now, I won't claim neutrality or pretend I never judge. I'm human enough to have my own likes and dislikes. But that's just me being me, and it has no higher value than me. There's no cosmic scale, above and beyond an individual person, that will tell you your actions are insufficient.
If you choose to spend a year building low-income housing, that is good. If you choose to give five dollars to a homeless person, that is good. If you choose to care for your ailing parent, that is good. Utilitarianism is something you use to determine those things, in those times when good is something you want to do. But it won't tell you what you are, and it won't rank you on a leaderboard. It's a tool, nothing more and nothing less.
*And yes, that does mean a dog is a person.
The Ever-Growing Rulebook
Mar. 8th, 2019 01:02 am
One of the many reasons I’m a Utilitarian: I fundamentally do not trust any moral code that allows you to make up new rules on the spot.
A common argument against Utilitarianism is that it overcomplicates things. A Utilitarian who says that lying is wrong has to come up with an argument for why lying reduces net utility. Someone else can just say that lying is wrong, because they already know that lying is wrong. Why go to so much effort to get the same answer?
Rather than address this issue, I’d like to bring up a different one: a person who uses an old flag to clean a toilet. (This example comes from Jonathan Haidt’s The Righteous Mind.) My experience with deontologists is that they’ve got a lot of separate, seemingly unrelated moral rules already, and they’re not afraid to add new ones. They may have a rule that disrespecting symbols is wrong, or that lacking in patriotism is wrong, but if they don’t have one, and they’re sufficiently perturbed by this idea, they’ll just make up a rule that using a flag to clean a toilet is wrong. But Utilitarians, who aren’t allowed to make up new rules, have to justify how using a flag to clean a toilet reduces net utility. If they can’t come up with an argument, they have no choice but to say that it’s perfectly okay.
I could say that I’m afraid the ability to create new moral rules will be used to create a rule that being gay is wrong. This is true, but it’s not the whole picture. I’m afraid of a rule that being weird is wrong. You think BDSM is icky? Make up a rule that BDSM is morally wrong! You think someone else’s religion is strange and frightening? Make up a rule that their religion is morally wrong! It’s a bully’s view of the world, where the person who can be targeted should be targeted, and whatever punishment you enact upon them is just putting them in their proper place.
(Also benefiting from this argument: Kantian ethics, some forms of care ethics, and divine command theory, as long as God isn’t coming down to personally tell you that all the things you don’t like are heretical. Also opposed by this argument: virtue ethics, because the ability to say that some action you don’t like is inherently against a particular virtue is the philosophical equivalent of conjuring an anvil directly above your opponent’s head.)
Any blog should start off by introducing its subject, but in theory, Utilitarianism is quite simple. It’s a theory of ethics under which you should take whatever actions will lead to the greatest happiness for everyone involved. There are no other rules or restrictions–just make as many people as happy as you can make them.
So to make this more interesting, let’s go over some of the things Utilitarians DON’T have.
Fiction for Utilitarians: Ender's Game
Dec. 8th, 2018 06:09 pm
As a Utilitarian, I naturally have a grudge about how this tends to play out. It’s the tragic villains who tend to espouse Utilitarian values, doing something horrible because every alternative they see is worse. The heroes are the ones who refuse to accept this, then pull off a solution that saves everyone, because the person writing the story made there be a solution that saves everyone. If the Utilitarian was right, and there weren’t any other choices, then by stopping him, the heroes would be responsible for something horrible, and we can’t have that, now can we?
Of course, I can’t claim the rules work any differently for moral relativist heroes. I once read a truly vile book called The Soprano Sorceress where the protagonist murders her way onto the throne, kills thousands of civilians to stay in power, and justifies it all as being for the greater good. Apart from all the bodies she piles up, it basically works out like she planned. You can’t have your hero killing people if there was any better solution to the problem. You can’t, unless you’re Orson Scott Card and you’re writing Ender’s Game.
The writers of Prey were clearly aware of Utilitarian thought. There are a lot of references to the trolley problem, and the story’s central conflict is very Utilitarian in nature. There are several points where you’re given a moral dilemma and asked to sort it out, and one of your options is often the Utilitarian one.
More Executions? Are You Positive?
Dec. 8th, 2018 06:00 pm
I think that’s the principle the American death penalty runs under. No one cares if the dying experience horrible pain. It's just about whether they demonstrate physical signs of pain before they expire, because that might call into question the morality of killing people.
It's Double Effective!
Dec. 8th, 2018 05:57 pm
“It wasn’t like I intended to blow up that orphanage! It just happened to be nearby!”
Seriously, someone introduce these fuckers to the concept of depraved indifference.