>>2338
>While it is true that we cannot know if a pleasing action is "good" in the long run, we can make "predictions" as to whether or not this is likely.
Consider whether there's any difference between belief in a predicted value of X and belief that an unknown process X outputs a particular value x. The fundamental problem still stands: the real value remains unknown, and a prediction is not knowledge of it. Conceding this would necessitate a shift from trying to maximize benefit in each act to developing behaviors that consistently maximize benefit, which is effectively a form of virtue ethics.
This point can be put in more technical terms by referencing the problem of "exponential" (rational) versus "hyperbolic" discount rates. Economics is a branch of applied mathematics that was greatly influenced by moral philosophy, mainly through Adam Smith, Jeremy Bentham, and John Stuart Mill. Rational agents, when presented with an intertemporal optimization problem with an unknown delay between choice and reward, will consistently make errors, and their behavior will shift from an exponential discounting function, V(t) = A*exp(-k*t), to a hyperbolic one, V(t) = A/(1 + k*t). The latter is steeper near the present, hence more present-biased, and it produces preference reversals as a reward date approaches.
And so, for example, a person who has borrowed money in order to buy a car might feel, as the payment date closes in, the temptation to default on his debt, though this might be bad for him later on. Or keeping a promise might seem increasingly hard, or a good deed done might seem in retrospect a foppish affectation. The only way to bypass this problem (effectively akrasia) is to develop heuristics that maintain consistently good behavior, and to define outright how much one should reward or indulge oneself.
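To make the reversal concrete, here is a minimal Python sketch using the standard functional forms given above; the dollar amounts, delays, and discount rates are invented purely for illustration:

import math

# Exponential (rational) discounting: time-consistent preferences.
def exponential(value, delay, k=0.005):
    return value * math.exp(-k * delay)

# Hyperbolic discounting: steeper near t = 0, hence present-biased.
def hyperbolic(value, delay, k=0.01):
    return value / (1 + k * delay)

# The debtor's choice: keep $800 now by defaulting, or secure $1000
# worth of long-run benefit by paying (assumed, for illustration, to
# arrive 30 days after the payment date).
SMALL, LARGE, GAP = 800, 1000, 30

for days_out in (60, 5):
    for name, discount in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        prefers_paying = discount(LARGE, days_out + GAP) > discount(SMALL, days_out)
        print(f"{days_out:2d} days before payment, {name:11}: prefers paying = {prefers_paying}")

Run it and the exponential discounter prefers paying at both horizons, while the hyperbolic discounter flips to defaulting as the payment date closes in. That flip is the preference reversal, and it is exactly why precommitment heuristics are needed at all.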
As for your questions, I don't see how utilitarianism would be the "most rational" system of ethics. All systems of ethics have problems, either because they lack some desirable feature or because they have unintended consequences. If there were such a thing as a "best" system (one without failures), it would certainly have been declared such by now. While one might believe that some people are mistaken in their ethical beliefs or remain unconvinced by demonstrations, it's absurd to believe that 2,500 years of philosophical tradition rest entirely on mistakes and equivocation.
Classical utilitarianism, which allows interpersonal utility comparisons, completely misses human dignity and could justify all sorts of cruelties so long as they produce a net increase in utility. Deontology always struggles when choices imply a necessary violation of human dignity (the Trolley Problem) due to the incommensurability of human life. So does contemporary utilitarianism, since it incorporates human dignity. Virtue ethics solves some problems but doesn't constitute an ethical calculus, since actions themselves aren't immediately relevant to virtue, only acquired habits. And moral nihilism completely misses the point (ignoratio elenchi).
The whole point of "thought experiments", on the other hand, is to showcase the above-mentioned distinctions. They're not primarily intended as refutations of any particular theory, but rather as benchmarks through which one might compare given theories.
However, not every system warrants an immanent critique; only those which are supposed to describe some object commonly given to all do. If "good" is believed to be some intuitive yet objective notion, as in the case of moral realism, then a system producing particularly troubling results (the utility monster, for classical utilitarianism; the refugee problem, for deontology) faces a valid argument against it. Otherwise, you've just conceded the moral nihilist's point (there is no objective morality), and hence this whole discussion is meaningless. Or perhaps morality is purely subjective.
Thought experiments could then influence people's preferences toward one system or another. In that case, their rhetorical strength would rest on the degree to which they reflect people's particular intuitions about "good". Whether or not this should constitute a valid argument against or in favor of any system is questionable, but I'd like to think that any proponent of a system based on a correct psychological evaluation of what is "good" would be convinced of the validity of some other system, even if not for himself, on the basis of others' preferences.
That someone judges deontology to be the best system for him would suffice to justify his choice, if he believes it to be "good", just as my (being a utilitarian) judging some action to be the best course of action suffices to justify mine. Even if others' choices differ wildly from mine, this doesn't entail that they are any less pleasurable to them than my own choices are to me.
Is this a better answer to the questions posed in the opening post?