
/philosophy/ - Philosophy

Start with the Greeks




cddae2 No.2323

I have lately been thinking quite a lot about ethics and whether or not it can be said to exist. I have come to know about the ethical theory called Utilitarianism. Utilitarianism, as I have been told, says that the morally right action is the action which leads to the most happiness. Utilitarianism seems like the perfect ethical theory; my argument for thinking this goes something like this:

When I feel negative emotions, I experience that as objectively bad for me. It is in a sense something objectively negative in the universe that consists of (is made up of) my thoughts and experiences. I exist in the actual universe (that outside of me), and so this objectively negative experience for me becomes objectively negative in the universe as a whole, seeing as I am part of the universe. Likewise, positive emotions for me or anyone else become something objectively good in the universe. From this we can justify good feelings/experiences as objectively good (no matter who or what experiences them), and negative feelings/experiences as objectively bad.

I cannot find a really good counterargument to this. Initially I found moral nihilism the most rational alternative, but now I am not so sure. So my questions are:

Is my argument valid, and if not, why? Is utilitarianism the most rational ethical theory? If so, for what reason? If not, why not, and what is the most rational ethical theory?

I also have another minor question. I often see philosophers trying to investigate ethics using thought experiments. They posit some situation and conclude on the optimal action according to some ethical theory. If the action that the ethical theory recommends seems counterintuitive or evil, they will question the ethical theory, but is this a valid approach? It would seem to me that one should investigate ethics based on reason and not let one's feelings interfere; if an ethical theory arrived at by rational argument should recommend actions we do not like, we should follow it anyway and only question the theory based on rational arguments.

cddae2 No.2325

Imo, there's nothing special about the greatest number. One Shakespeare > Ten plebs.

You might like this:

https://8ch.net/philosophy/res/349.html

(I'm a pleb myself so idk how to link to other threads).


cddae2 No.2326

>>2325

forgot to change name. like I said. i'm a pleb-lord.


cddae2 No.2329

>>2325

I agree, there's nothing special about the greatest number. What the theory in my post would suggest, however, is to save the beings most likely to experience the greatest happiness in the remainder of their lives.

If this means killing thousands of ordinary people to save one super-happy being, it is alright only as long as the outcome gives more net happiness.


cddae2 No.2332

>>2329

How would you even measure it? That is dangerous philosophy. I would be more happy as three people if you died, and it would be worth it because more happiness is manifested than lost. First of all, how do I prove it? Second of all, does happiness in this context include other forms of comfort, like simple contentment?


cddae2 No.2333

>>2329

Ah, I see. But I still don't think outright happiness is the best measuring stick for ethics tbh, nor do I understand how you would go about objectively calculating it.

I like Nietzsche and Seneca's statements about how pain and trials are necessary for a 'virtuous' life.


cddae2 No.2334

How do I know that I know, if I assume that I know something? And if I don't know whether I know anything, how can I answer the question?

>also

I can also claim that it is not real, and that claim could also be objectively true if I believed it. What is true is kinda "whatever"; the question is whether it is a proper means to an end. Philosophy is a means to an end.

https://en.wikipedia.org/wiki/Quietism_%28philosophy%29


cddae2 No.2336

>Can Utilitarianism be justified solely on the grounds of my pleasure and displeasure?

If "pleasure" or utility is conflated with "good", yes. That is in fact the reason why Utilitarianism first arose.

But whether or not any pleasing action is "good", or even pleasurable in the long run, isn't known. And proceeding as if it were might give rise to vices whose evil is proportional to the degree by which present pleasures exceed or fall short of the good created by any action. This is a point made by Aristotle in the Nicomachean Ethics. A similar point is made by Nietzsche, without reference to good, but to "nobility" and self-mastery.

>Can the correctness of particular systems of ethics be judged according to their intuitiveness?

If you believe that the answer to the previous question is "yes", then "yes", since you were doing exactly that.

Plus, systems of ethics come from somewhere. They don't appear ex nihilo, but are rather a systematization of what is customary in some society (this is in fact the original meaning of the words "ethos" and "mores").

Why you took so many lines to ask these two questions is beyond me.


cddae2 No.2337

OP here.

>>2332

Imagine that a basic unit of happiness-suffering existed; let's call this unit U. Positive values of U correspond to positive emotions (the more positive the value, the stronger the positive emotion), negative values to negative emotional experiences (the more negative the value, the stronger the negative emotion), and U = 0 to a neutral emotion. Every animal can experience plus or minus a few U's. U's can be "transferred" from creature to creature, assuming the "recipient" is capable of experiencing the "incoming" amount of U's. What I mean by that is that, for example, a frog can experience 2 U's, a man can also experience 2 U's, and who experiences the U's does not matter. A man can, however, experience far greater values of U, both positive and negative, than a frog, because their central nervous systems are not equally sensitive and complex. U's cannot actually be transferred between beings unless we had a brain-to-brain interface that could transport emotions across to another person; the existence of such an interface is not relevant to this argument.

I posit that the argument in my main post, if valid and accepted must lead to this ethical behaviour/doctrine:

Always choose the action that gives the maximum amount of positive U's and the minimum amount of negative U's, such that the net total of U's across all beings, past and future, becomes as great as possible.

This is not to say that we should actually search for this unit U and try to measure it, nor that we can always know which action gives the maximum amount of U's. Rather, we should act as if U's existed and always choose the action most likely to yield the maximum amount of U's.
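Sketched as code, that decision rule is just an expected-value comparison over uncertain outcomes. The actions, probabilities, and U values below are hypothetical illustrations, not anything from this thread:

```python
def expected_u(outcomes):
    """Expected net U of an action, given (probability, net U) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions with uncertain outcomes: each maps to a list of
# (probability, net U across all affected beings) pairs.
actions = {
    "help_stranger": [(0.8, 3.0), (0.2, -1.0)],  # expected U = 2.2
    "do_nothing":    [(1.0, 0.0)],               # expected U = 0.0
}

# The rule: pick the action most likely to yield the greatest net U.
best = max(actions, key=lambda a: expected_u(actions[a]))
print(best)  # help_stranger
```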

>I would be more happy as three people if you died, and it would be worth it because more happiness is manifested then lost. First of all, how do I prove it

I do not know what you mean by that.

>second of all is happiness in the context include other forms of comfort, like simple contentment?

Yes, I can experience a small amount of positive U's when being well rested, a larger amount when orgasming, and a larger amount still when becoming the world champion of something. The point is to maximize U's no matter what actual feelings are present.

>>2333

The point is not to never experience negative U's, but to maximize the net total of U's. If some amount of negative U's is necessary for maximizing the sum of U's over a person's lifetime, then experiencing the negative U's is the right thing to do. For example, suppose working during the weekdays is necessary to experience Friday as truly positive: you experience -1 U for each of the five workdays and 10 U on Friday night when you get off work. This would be better than never working on the weekdays but not experiencing Friday as especially positive, say experiencing 0.5 U on each of the five days. The first option is better because it nets you a total of 10 U - 5 * 1 U = 5 U, while the second option nets you only 0.5 U * 5 = 2.5 U.
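The Friday example works out numerically like this (a trivial sketch of the same arithmetic, using the U values given above):

```python
# Option 1: work the weekdays (-1 U each), enjoy Friday night (+10 U).
work_week = 5 * (-1) + 10   # net 5 U

# Option 2: never work, Friday is nothing special (+0.5 U per day).
idle_week = 5 * 0.5         # net 2.5 U

print(work_week, idle_week)  # 5 2.5 -> option 1 maximizes net U
```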


cddae2 No.2338

>>2336

I did not express myself as clearly as I should have, it seems. I did not ask whether or not Utilitarianism COULD be justified in this way; rather, I wanted to ask: is it not the most rational system of ethics to justify good in this way, i.e. as is done in Utilitarianism?

Also, while it is true that we cannot know whether a pleasing action is "good" in the long run, we can make "predictions" as to whether or not this is likely.

I was not asking, strictly speaking, about the intuitiveness of ethical theories, but about whether rejecting them based on the actions they recommend, rather than on the theoretical, rational arguments in favor of them, is a valid approach. Should one not attack ethical theories purely with arguments, not thought experiments?

It is true that systems of ethics do not appear ex nihilo; one can argue for them, and should one not choose the system for which the arguments are best?


cddae2 No.2343

>>2338

>While it is true that we cannot know if a pleasing action is "good" in the long run, we can make "predictions" as to whether or not this is likely.

Consider whether there's any difference between belief in a predicted value of X and belief that an unknown process X outputs a particular value x. The fundamental problem still stands: the real value is still unknown. Conceding that there is a problem would necessitate a shift from trying to maximize benefit to developing behaviors that consistently maximize benefits, which is effectively a form of virtue ethics.

This point can be put in more technical terms by referencing the problem of "rational" and "hyperbolic" discount rates. Economics is a branch of applied mathematics which was greatly influenced by moral philosophy, mainly through Adam Smith, Jeremy Bentham and John Stuart Mill. Rational agents, when presented with an inter-temporal optimization problem with an unknown delay between choice and reward, will consistently make errors, and their behavior will shift from a rational (exponential) discounting function to a hyperbolic discounting function, the latter being more present-biased.

And so, for example, a person who has borrowed money in order to buy a car might feel, as the payment date closes in, the temptation to default on his debt, though this might be bad for him later on. Or keeping a promise might seem increasingly hard, or a good deed might seem in retrospect a foppish affectation. The only way to bypass this problem (effectively akrasia) is to develop heuristics which maintain consistently good behavior, and to define outright how much one should reward or indulge oneself.
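The two discount functions mentioned above can be compared in a few lines; the parameter values here are illustrative assumptions, not taken from any source:

```python
def exponential(value, delay, r=0.1):
    """Rational (exponential) discounting: value / (1 + r)^delay."""
    return value / (1 + r) ** delay

def hyperbolic(value, delay, k=0.1):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

# The hyperbolic curve falls off steeply at short delays but flattens out
# at long ones (present bias), so preferences can reverse as a payoff nears.
for d in (1, 10, 30):
    print(d, round(exponential(100, d), 1), round(hyperbolic(100, d), 1))
```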

As for your questions, I don't see how utilitarianism would be the "most rational" system of ethics. All systems of ethics have problems, because they either lack some features which would be desirable or have some unintended consequences. If there were such a thing as a "best" system (without any failures), it would certainly have been declared such by now. While some people might be mistaken about ethical beliefs or might remain unconvinced by demonstrations, it's absurd to believe that 2,500 years of philosophical tradition rest entirely on mistakes and equivocation.

Classical utilitarianism, which allows interpersonal utility comparisons, completely misses human dignity and could possibly justify all sorts of cruelties so long as they produce a net utility increase. Deontology always struggles when choices imply a necessary violation of human dignity (the Trolley Problem), due to the incommensurability of human life. So does contemporary utilitarianism, since it adds human dignity. Virtue ethics solves some problems but doesn't constitute an ethical calculus, since actions themselves aren't immediately relevant to virtue, only acquired habits. And moral nihilism completely misses the point (ignoratio elenchi).

The whole point of "thought experiments", on the other hand, is to showcase the above mentioned distinctions. They're not primarily intended as refutations of any particular theory, but rather as benchmarks through which one might compare given theories.

On the other hand, not every system warrants an immanent critique, insofar as it is supposed to describe some object commonly given to all. If "good" is believed to be some intuitive yet objective notion, as in the case of moral realism, then a system producing particularly troubling results (the utility monster, for classical utilitarianism; or the refugee problem, for deontology) would constitute a valid argument against that system. Otherwise, you've just conceded the moral nihilist's point (there is no objective morality), and hence this whole discussion is meaningless. Or perhaps morality is purely subjective.

Thought experiments could then influence people's preferences toward one system or another. In such a case they would have a rhetorical strength based on the degree to which they reflect people's particular thoughts about "good". Whether or not this should constitute a valid argument for or against any system is questionable, but I'd like to think that any proponent of a system based on a correct psychological evaluation of what is "good" could be convinced of the validity of some other system, even if not for himself, on the basis of others' preferences.

That someone judges deontology to be the best system for him would suffice to justify his own choice, if he believes that it is "good", just as my (being a utilitarian) judging some action to be the best course of action would suffice to justify my own choice. Even if others' choices wildly differ from mine, this doesn't entail that they are any less pleasurable to them than my own choices are to me.

Is this a better answer to the questions posed in the opening post?



