Utilitarianism - The Only Philosophical School Worth Shit

AnOminous

Some fag here tried to start a thread on deontological Kantian ethics and this obviously failed because nobody cares about that shit.

However, there is one school of philosophy that can, if adopted, actually have real, positive impact in the world. That's utilitarianism.

It is the seemingly obvious idea that philosophy should actually be useful: that ideas are good, in a somewhat ironically Platonic sense, if adopting them would actually have positive results and actually improve humanity.

The hero of utilitarianism, if there is one, would be the philosopher Jeremy Bentham. He's an awesome fucking dude, and just to piss people off, he had his preserved corpse put on permanent display at University College London.

Utilitarianism.

The idea that ideas should be useful.

Anyone who disagrees is literally fucking retarded. Discuss, tards.
 
You present this as if you're going against the tide, but utilitarianism is an extremely popular school of thought. It's certainly much more popular than Kantian deontology. From what I've seen it's generally trumpeted the hardest by either libertardians or commies.

Most people broadly agree with the basic tenet of utilitarianism, that things are good which bring people more pleasure, happiness, or benefit than harm or displeasure, but applying it in the most hardline fashion leads to interesting corner cases. Eliezer Yudkowsky's "torture vs. dust specks" argument insinuates that an exponent tower's worth of people getting dust specks in their eyes is more harmful overall than one person being horribly tortured for fifty years.
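
In purely additive terms (a rough gloss, not Yudkowsky's exact formulation): if each dust speck costs some tiny $\epsilon > 0$ and fifty years of torture costs some huge but finite $T$, then $N\epsilon > T$ whenever $N > T/\epsilon$, and $N = 3\uparrow\uparrow\uparrow 3$ dwarfs $T/\epsilon$ for any remotely sane values of $T$ and $\epsilon$.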

There's also the question of whether net happiness cancels out all other factors that could possibly be taken into consideration. A question I once heard to this effect is whether it would be morally acceptable to rape a comatose woman who would die in an hour if no one would ever know you did it, because it would be pleasurable for you and she'd never be the wiser.
 
I think that there are several fundamental problems:

First is how the utility of individuals is defined.
Second is whose utility is actually relevant.
Third is how the utility of groups is summed up.
Fourth is the "god problem".
Fifth is the question of motivation.

It might seem like a simple thing to define utility: utility is just how happy someone is. But the reality is quite different, because we are defining utility as a sort of end consequence of an action.

Sometimes a simple number corresponding to utility is given, but this is an extreme oversimplification that ignores the actual method of deriving it.

One possible way is to treat utility as something that is present in every moment and then take an improper integral of it as time approaches infinity (or a definite integral from the present to death).
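
In symbols (notation mine, assuming a momentary utility function $u(t)$ measured from the present $t_0$):

$$U = \int_{t_0}^{\infty} u(t)\,dt \qquad\text{or}\qquad U = \int_{t_0}^{T_{\text{death}}} u(t)\,dt$$

Note that the improper version only converges if $u(t)$ decays toward zero fast enough, which is itself a substantive assumption.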

Alternatively, it could be seen as something that is simply a consequence and thus does not need to be integrated, or as something that varies through time but where only the last value is significant. This raises the question of when it is measured, though.


Also, can there be negative utility, or is utility either positive or zero?


It is unclear how this number is derived, but unless this problem is solved it will cause substantial problems for the theory.



Additionally, there is the problem of what utility is derived from, regardless of where the number comes from.
Is utility just pleasure, or is it something else entirely? If it is pleasure, then several things that many would see as problematic, such as stealing from someone when they don't know it or being on drugs all the time (in order to get pleasure), can be seen as good.

This is even more significant if we consider non-humans to have utility.
Another problem of utilitarianism is that it is unclear whom the ethical system covers. Whether it covers future persons, for example, is important, because if it does then there would be an imperative to produce more persons so they could be happy too. Does it cover non-humans, and if so, how would their utility compare with human utility? Would one need to take into account the happiness of insects, or of bacteria? What can justify such limitations?
How does one take a set of utility values for a group and aggregate them into one value? Should we just add them all up? Should we take the median, the mean, or the mode? What about more exotic methods, such as defining the aggregate value as

\exp\left(\frac{1}{n}\sum_{i=1}^{n}\log x_i\right)

that is, the geometric mean of the individual utilities x_1, ..., x_n?


If it just involves adding, then we can end up in a scenario where population maximization is utility maximization.
If it is based on the average, then the classical solution is to kill unhappy people to raise the average happiness (though this can be resolved by treating dead people as having zero utility and holding that there is no negative utility).
If it is based on the median, then one is free to do whatever one wants to persons of high or low utility.
If it is based on the mode, then again the persons in the mode are the only ones who actually matter.

Likewise, even if this is resolved, using the mean, median, or mode doesn't distinguish between a population consisting of a single individual and a whole planet, as far as quality of outcome is concerned.
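
A minimal sketch of the point in Python (assuming cardinal, interpersonally comparable utility scores, which is itself a huge assumption; all numbers invented):

```python
import math
import statistics

def aggregate(utilities, method):
    """Collapse a list of individual utility scores into one group value."""
    if method == "total":
        return sum(utilities)
    if method == "mean":
        return statistics.mean(utilities)
    if method == "median":
        return statistics.median(utilities)
    if method == "mode":
        return statistics.mode(utilities)
    if method == "geometric":  # the exp-of-averaged-logs formula above
        return statistics.geometric_mean(utilities)
    raise ValueError(f"unknown method: {method}")

hermit = [10.0]              # a world of one blissful individual
planet = [10.0] * 1_000_000  # a world of a million equally blissful people

for method in ("mean", "median", "mode", "geometric"):
    # Every averaging-style aggregator rates the two worlds identically...
    assert math.isclose(aggregate(hermit, method), aggregate(planet, method))

# ...only the total tells them apart, at the cost of rewarding sheer headcount.
print(aggregate(hermit, "total"), aggregate(planet, "total"))  # 10.0 10000000.0
```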

Additionally, if there is a person who gains disproportionate utility from something (Nozick's "utility monster"), then there would be a moral obligation to give them everything at the expense of all others, unless median or mode utilitarianism is in play.
There is a potential problem in utilitarianism associated with it undermining the ability to interact with others. On the surface it seems to prohibit things such as punishment and the like. Although punishment could be justified as deterrence, there are several actions which utilitarianism would promote that are inconsistent with facilitating social interaction. They include things like an obligation, as the seller of a drug, to take the money of the person who pays the most for the drug (more money is good for you), then give them a fake and give the real drug to the person who would be missed the most if they died (best for them and for the people who know them). Some consider this a duty-related problem, but I think it is better described as being fundamentally incompatible with actual human social interaction; it would only work if everyone were a utilitarian. Some of this is resolved by rule utilitarianism, but in many ways the usage of aggregate wellbeing doesn't make much sense in a world with active conflict (as opposed to merely differing preferred allocations).
The final problem with utilitarianism is that it doesn't provide sufficient justification for an individual to practice it. For an impartial "god" it may make sense, but there isn't enough justification for an individual with preferences of their own to act in a utilitarian manner. If they like such an outcome then they can pursue it on their own, but that would result in infinite regress, because they would have to take into account their own preference for utilitarianism.

Excluding a personal preference for utilitarianism, of which it is debatable whether it even counts as true utilitarianism, I see no justification for it. I believe the burden of proof lies on the utilitarian to give support for acting that way, and that Russell's teapot applies here.

This is one of the worst Deep Thoughts OPs I have seen. There was no actual description of what utilitarianism is, only a link to a historical figure associated with it, all in a highly obnoxious OP. I am disappointed in you @AnOminous; I thought that a mod would give a better OP.
 
Goodideaism - The philosophy that ideas should be good. Why doesn't everyone follow this philosophy??? Are they fucking retarded???

I expressed my ideas pretty stupidly.

It's actually somewhat the opposite.

Utilitarianism is not the idea that ideas should be, somehow, inherently good. That's actually deontological ethics.

Utilitarianism is the idea that ideas should have good RESULTS.
 
I expressed my ideas pretty stupidly.

It's actually somewhat the opposite.

Utilitarianism is not the idea that ideas should be, somehow, inherently good. That's actually deontological ethics.

Utilitarianism is the idea that ideas should have good RESULTS.
An actually useful way of explaining utilitarianism is to describe it as:

The philosophical position that one is obligated to act in the way that maximizes net utility, that is, the good for every person summed up minus the bad for every person summed up.
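
As a bare-bones formula (my rendering, assuming the good $g_i$ and bad $b_i$ for each person $i$ can be scored and summed):

$$\text{choose the action } a \text{ that maximizes } \sum_i \big(g_i(a) - b_i(a)\big)$$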

Ideas have nothing to do with utilitarianism, which is all about actions.
 
I think that during the time of Jeremy Bentham, people thought less about the big picture than they do now, so utilitarianism appeared to work better, because it only included people in the local area, as opposed to Syrian refugees or microorganisms.
 
Some fag here tried to start a thread on deontological Kantian ethics and this obviously failed because nobody cares about that shit.

However, there is one school of philosophy that can, if adopted, actually have real, positive impact in the world. That's utilitarianism.

It is the seemingly obvious idea that philosophy should actually be useful: that ideas are good, in a somewhat ironically Platonic sense, if adopting them would actually have positive results and actually improve humanity.

The hero of utilitarianism, if there is one, would be the philosopher Jeremy Bentham. He's an awesome fucking dude, and just to piss people off, he had his preserved corpse put on permanent display at University College London.

Utilitarianism.

The idea that ideas should be useful.

Anyone who disagrees is literally fucking retarded. Discuss, tards.
I'd say that characterizing Utilitarianism as "the idea that ideas should be useful" is somewhat misleading, since we obviously need to ask "useful in what way".

A Kantian would likely say that ideas are useful insofar as they are grounded in rationality and the correct account of what it is to be a human being.

An Aristotelian would say that ideas are useful insofar as they contribute to becoming a flourishing agent (i.e. help you live a complete human life).

Utilitarians could be distinguished by saying that ideas are useful insofar as they promote net human happiness/pleasure.

While you may well find the Utilitarian account most compelling (as many do), it is obviously a far more controversial position than just saying ideas ought to be useful.
 
Let's say I'm a doctor in the ER. I have @CatParty, whose brain rotted away from too much shitposting. @Melchett needs a lung transplant from smoking too much crack. @FramerGirl420 needs a new liver from chugging too much kraken rum. @Null rotted his stomach away from drinking too much coke zer0. @KatsuKitty needs a new rectum from taking it up the ass. @Ariel needs a new spine from her broad shoulders crushing it. @Randy Lahey had a heart attack from eating too many cheeseburgers. @*Asterisk* needs a new penis due to a botched circumcision. And so on and so on... I'm having a bad day. A strict interpretation of utilitarianism would tell me it's morally justified to kill @AnOminous, who is in the emergency room because he hurt his wrist from posting too much but is otherwise fine, in order to harvest his organs to save these people.

Or, as an article about transhumanism and utilitarianism being incompatible that I can no longer find on the internet pointed out:

Suppose there is a doctor about to perform an operation on a female patient. The patient is unconscious, under general anesthetic. Suppose that the doctor is considering raping the woman, and the doctor is assured that he can: he can make sure he does it without physically harming her and that no one will ever find out about it, apart from him, not even the woman. Suppose he also knows that both he and the woman are healthy; they do not have any sexually transmitted infections, and the doctor knows that the woman is infertile, so she will not be able to become pregnant. In this situation the doctor is considering the moral status of the act of raping this woman. What does utilitarianism tell him? Clearly, no one is going to ever suffer. That is assured. And he, the doctor, will derive pleasure from the act. Therefore, not only does utilitarianism tell us that the act of raping the patient is not immoral, it tells us that, in fact, the act is positively moral because it increases the happiness of the doctor. The doctor, under utilitarianism, would actually be morally obligated to rape this woman. Take a few moments to think about that. No thinking person with any modicum of human decency can base their morality off such a disgusting, abominable system of thought that can allow for situations where rape is a moral obligation.

Even though you may think these two scenarios sound outlandish, the idea that "the ends justify the means" is what enabled Stalin, Mao, Hitler, etc. to commit heinous atrocities while claiming that by doing so, they would create a utopian society that would be great for the general population.

This is why Utilitarianism needs some sort of side constraint on it to prevent these things from happening. The ends do not justify the means. One interpretation that I liked was John Rawls' "Veil of Ignorance", which, to quote from "A Theory of Justice":

The idea of the original position is to set up a fair procedure so that any principles agreed to will be just. The aim is to use the notion of pure procedural justice as a basis of theory. Somehow we must nullify the effects of specific contingencies which put men at odds and tempt them to exploit social and natural circumstances to their own advantage. Now in order to do this I assume that the parties are situated behind a veil of ignorance. They do not know how the various alternatives will affect their own particular case and they are obliged to evaluate principles solely on the basis of general considerations. It is assumed, then, that the parties do not know certain kinds of particular facts. First of all, no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like. Nor, again, does anyone know his conception of the good, the particulars of his rational plan of life, or even the special features of his psychology such as his aversion to risk or liability to optimism or pessimism. More than this, I assume that the parties do not know the particular circumstances of their own society. That is, they do not know its economic or political situation, or the level of civilization and culture it has been able to achieve. The persons in the original position have no information as to which generation they belong. These broader restrictions on knowledge are appropriate in part because questions of social justice arise between generations as well as within them, for example, the question of the appropriate rate of capital saving and of the conservation of natural resources and the environment of nature. There is also, theoretically anyway, the question of a reasonable genetic policy. In these cases too, in order to carry through the idea of the original position, the parties must not know the contingencies that set them in opposition. They must choose principles the consequences of which they are prepared to live with whatever generation they turn out to belong to.

With this, you wouldn't know who you were in the aforementioned scenarios that I provided as the faults of pure utilitarianism. Therefore, you wouldn't want to harm the innocent person because you may be that innocent person, and thus you would not want to bring harm upon innocent or marginalized people.
 
Utilitarians could be distinguished by saying that ideas are useful insofar as they promote net human happiness/pleasure.

Correct. Specifically, that ideas or ethics, when actually acted upon, would tend to lead to a greater satisfaction of preferences.

This seems a more useful formulation because there is no real way of measuring happiness or pleasure, and for that matter, there are probably a substantial number of people who do not prefer hedonism but measure their satisfaction in some other way, which might even involve foregoing happiness in favor of personal pride or loyalty to others.

There clearly has to be some hierarchy of preferences, though, because there are preferences that might be worthless or harmful, such as the pleasure of a serial killer in murdering a victim. Satisfying such a preference clearly loses when balanced against the fact that the victim would clearly prefer to go on living.
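
A toy illustration of that weighting in Python (the names, numbers, and zero-weight rule are all invented for the example):

```python
# Each preference has a strength (how much it matters to its holder) and a
# weight (how much the moral calculus counts it); harmful ones get weight 0.
preferences = [
    {"holder": "victim", "content": "go on living",  "strength": 100.0, "weight": 1.0},
    {"holder": "killer", "content": "murder victim", "strength": 100.0, "weight": 0.0},
]

def outcome_value(satisfied):
    """Value of an outcome = weighted strength of the preferences it satisfies."""
    return sum(p["strength"] * p["weight"] for p in satisfied)

murder = outcome_value([preferences[1]])     # only the killer's preference met
no_murder = outcome_value([preferences[0]])  # only the victim's preference met
print(no_murder > murder)  # True: 100.0 beats 0.0
```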

This is why killing is rarely ethical, as it permanently ends the ability of that person to satisfy their preferences.

Interestingly, the best way to ensure that preferences are satisfied across a society is through legal frameworks. Kantian ethics leads to this conclusion as well, but the starting premise is different, as rules/laws are seen as goals in and of themselves, and behavior is correct and virtuous specifically because it adheres to some objective rule, which is itself correct and virtuous.

By comparison, rules/laws are seen as good from a utilitarian perspective because people are more able to pursue preferences, happiness or otherwise, in an orderly society with predictable outcomes ensured by a sound framework for behavior. In a rule-free society of chaos and unpredictability, one spends too much time reacting to immediate threats to pursue personal satisfaction.

So the society that makes it reasonably simple to satisfy those lower levels of Maslow's hierarchy of needs, by having rules, is more likely to have happier people.
 
Correct. Specifically, that ideas or ethics, when actually acted upon, would tend to lead to a greater satisfaction of preferences.

This seems a more useful formulation because there is no real way of measuring happiness or pleasure, and for that matter, there are probably a substantial number of people who do not prefer hedonism but measure their satisfaction in some other way, which might even involve foregoing happiness in favor of personal pride or loyalty to others.

There clearly has to be some hierarchy of preferences, though, because there are preferences that might be worthless or harmful, such as the pleasure of a serial killer in murdering a victim. Satisfying such a preference clearly loses when balanced against the fact that the victim would clearly prefer to go on living.

This is why killing is rarely ethical, as it permanently ends the ability of that person to satisfy their preferences.

Interestingly, the best way to ensure that preferences are satisfied across a society is through legal frameworks. Kantian ethics leads to this conclusion as well, but the starting premise is different, as rules/laws are seen as goals in and of themselves, and behavior is correct and virtuous specifically because it adheres to some objective rule, which is itself correct and virtuous.

By comparison, rules/laws are seen as good from a utilitarian perspective because people are more able to pursue preferences, happiness or otherwise, in an orderly society with predictable outcomes ensured by a sound framework for behavior. In a rule-free society of chaos and unpredictability, one spends too much time reacting to immediate threats to pursue personal satisfaction.

So the society that makes it reasonably simple to satisfy those lower levels of Maslow's hierarchy of needs, by having rules, is more likely to have happier people.
Oh I don't really disagree that utilitarianism is a useful and deeply plausible approach.

My main gripe is, I guess, that there seems to be something deeply right in what the Kantians and Aristotelians are saying as well. We do, like Kantians, seem to think of morality as consisting in part of rational, exceptionless rules regarding behavior. We also, like Aristotelians, seem to think that morality consists at least in part in the acquisition of certain traits and the fulfillment of certain roles.

This might just be my empiricism talking, but it seems to me any satisfactory ethical theory needs to be able to account for the phenomenological character of moral decision making, our moral intuitions, and the plausibility of competing moral theories. On balance, I think Virtue ethics does the best job in this regard, since being the right sort of agent involves following reason and rational rule-setting (like Kantians), involves considering the consequences of one's acts (like Utilitarians), and involves attempting to emulate something like a Platonic ideal of the good.

My gripe is, I guess, not so much that utilitarianism is implausible or fundamentally missing the point; it is that it fails to explain what it is about morality that leads us to make the kinds of mistakes we make.
 
My main gripe is, I guess, that there seems to be something deeply right in what the Kantians and Aristotelians are saying as well. We do, like Kantians, seem to think of morality as consisting in part of rational, exceptionless rules regarding behavior. We also, like Aristotelians, seem to think that morality consists at least in part in the acquisition of certain traits and the fulfillment of certain roles.

It's very compelling, and in certain cases such considerations have to be taken into account when framing social policies. For instance, if any substantial part of the society holds such beliefs passionately, then imposing some framework, top-down, that the majority doesn't accept is going to lead to perceived illegitimacy, and any presumed advantage to doing so stands a chance of being wiped out by the backlash.

Then you get to the situation where the goal is seen as so desirable that the solution is simply to get rid of those people. This may "work," even in the long run, although it usually doesn't. If you consider China's move from an impoverished, backward society continually exploited by every great power in the world working in tandem to a modern, industrial, horribly polluted, half-totalitarian dystopia in a few decades an improvement, that would be an example.

However, if you still reject any outcome like that, it is easy enough to reject any policies that kill innocent people, or at least result in an unacceptable level of deaths. (For instance, despite the distaste for "putting a price tag on a life," anyone who designs highways does this. They have a limited amount of money for safe design, and while there are all kinds of safety devices that can be installed, we don't simply spend every possible dollar on every conceivable device. That's essentially deciding that some level of expense to reduce accidental deaths marginally is just not worth it.)

In any event, you can simply assign a very high value to the entirety of someone's life, so that deliberately wiping out any chance of them ever satisfying their preferences would be seen as unacceptable. Doing that to millions of people would be abominable under such a system regardless of the good on the other side.

Your utility monsters will, of course, see what's on the other side as so utterly wonderful that it balances it out, though. They're monsters, after all. They'd be monsters if their reason was "Deus Vult" too, though, and we've had plenty of those.

The doctor, under utilitarianism, would actually be morally obligated to rape this woman. Take a few moments to think about that. No thinking person with any modicum of human decency can base their morality off such a disgusting, abominable system of thought that can allow for situations where rape is a moral obligation.

There are a number of ways around this. The preference to rape or murder is one that is worthy of being given no weight, so there's no paradox here as doing it could be of no possible benefit.

Further, regardless of whether the doctor would ever be caught, he would himself know what he had done and if he had any decency, his self-image would be permanently destroyed. He would know he was a rapist. Utilitarianism doesn't mean treating people as if they were mindless jackoff apes whose highest goal was momentary physical pleasure.

Finally, it's a contrived scenario that could never occur in reality, as one can never in reality be certain of the outcomes of one's actions. So who cares? That's never going to happen anyway.
 
There are a number of ways around this. The preference to rape or murder is one that is worthy of being given no weight, so there's no paradox here as doing it could be of no possible benefit.

Further, regardless of whether the doctor would ever be caught, he would himself know what he had done and if he had any decency, his self-image would be permanently destroyed. He would know he was a rapist. Utilitarianism doesn't mean treating people as if they were mindless jackoff apes whose highest goal was momentary physical pleasure.

Finally, it's a contrived scenario that could never occur in reality, as one can never in reality be certain of the outcomes of one's actions. So who cares? That's never going to happen anyway.
I think that this is a major thing that many critics of ethical theories forget: despite the possibility of absurd scenarios, they almost never come up. Also, preference utilitarianism can say that the woman still has a preference not to be raped even after death.

Your utility monsters will, of course, see what's on the other side as so utterly wonderful that it balances it out, though. They're monsters, after all. They'd be monsters if their reason was "Deus Vult" too, though, and we've had plenty of those.
I don't understand what you are saying here.

I think that the average vs. total utilitarianism problem of killing everyone does not arise in preference utilitarianism, and if one ignores future persons (but treats contemporary humans as though they have preferences about certain future scenarios) then it works for either one.

I really think that the only problems with utilitarianism are problems in creating welfare functions (of which utility monsters are the main example because average vs total is not important) and the death of god
 
I really think that the only problems with utilitarianism are problems in creating welfare functions (of which utility monsters are the main example because average vs total is not important) and the death of god

I misused the utility monster scenario and what I was actually talking about was the totalitarian utilitarian.

I haven't really thought about that scenario because I don't really believe such things exist or, if they do, should even be taken into account. I'd treat claims of such as similar to the claims of a serial killer that they derive tremendous enjoyment from murder, i.e., as morally irrelevant.
 
Though the title of this thread indicates its subject is utilitarianism, it appears some of us are actually discussing the broader view of consequentialism. A utilitarian is a consequentialist, but a consequentialist is not necessarily a utilitarian.

For an example, see the following take:
Utilitarianism is the idea that ideas should have good RESULTS.

And compare this definition of utilitarianism to each of the following excerpts from the Stanford Encyclopedia of Philosophy:

Consequentialism, as its name suggests, is simply the view that normative properties depend only on consequences. This historically important and still popular theory embodies the basic intuition that what is best or right is whatever makes the world best in the future... (Source)

Utilitarianism is generally held to be the view that the morally right action is the action that produces the most good...On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good. (Source)

The distinction I mean to highlight is that the utilitarian calculus of "the good" should consider the good of others. A consequentialist calculus may or may not consider other beings.

With that said, I came here to discuss utilitarianism.

Derek Parfit's Repugnant Conclusion is my favorite attack on the utilitarian worldview. If you're interested, check out its Stanford Encyclopedia of Philosophy entry.

A short version of the argument is basically this:
  1. You start with a world with a population of, say, 100 people, and their lives are absolutely fantastic.
  2. Adding another person to this world marginally decreases the average happiness of its inhabitants, but only by a tiny amount (say 0.01%).
  3. Because you're a utilitarian and you're summing together every person's happiness, you agree that it's better to have 101 people than 100. The average happiness of the 101 may be slightly less than the average happiness when the population was 100, but the world's "net utility" is increased by the addition of another person.
  4. Do this enough times, and eventually you end up with a world filled with people whose lives are just barely worth living. Intuitively, this sounds shitty (i.e., repugnant), but because you're evaluating this world using a utilitarian framework, you consider this world better than one with fewer people and higher average happiness.
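
A back-of-the-envelope version of this in Python (every number is invented for illustration):

```python
def total_utility(population: int, avg_happiness: float) -> float:
    """Total (not average) utility: headcount times average happiness."""
    return population * avg_happiness

# Steps 2-3: adding one person shaves 0.01% off the average, yet the total rises.
before = total_utility(100, 100.0)           # 10000.0
after = total_utility(101, 100.0 * 0.9999)   # 101 * 99.99 = 10098.99
assert after > before                        # so the total utilitarian says: add them

# Step 4, taken to the limit: a vast world of lives barely worth living still
# outscores the small blissful one, which is the "repugnant" part.
world_a = total_utility(100, 100.0)          # 10000.0
world_z = total_utility(2_000_000, 0.01)     # 20000.0
assert world_z > world_a
```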
 
Do this enough times, and eventually you end up with a world filled with people whose lives are just barely worth living. Intuitively, this sounds shitty (i.e., repugnant), but because you're evaluating this world using a utilitarian framework, you consider this world better than one with fewer people and higher average happiness.
This assumes that adding more people will necessarily reduce the happiness of the existing population. There's no reason to make that assumption; happiness is not zero sum.
 
This assumes that adding more people will necessarily reduce the happiness of the existing population. There's no reason to make that assumption; happiness is not zero sum.
That's right - it is an assumption, and if you don't accept that assumption then the rest doesn't necessarily make sense. I think it's a reasonable assumption, though, because (many) resources are zero-sum. For example: a world with 800 billion people would be a less happy one than our world of 8 billion people.

To use a barn as an example of a smaller "world": imagine you start with one family of 4 chickens living in the barn, and the barn's dimensions are 20m wide by 20m long by 5m high. If you keep adding chickens, you'll eventually end up with a barn looking like one of those PETA videos about factory farming. If you keep adding more without stopping, you'll end up with a barn so densely packed it's a black hole.
 