>>79046
As a counterexample, let's take utilitarianism. Its core tenet–morality is maximized when utility is maximized and/or suffering is minimized–is simple enough when taken at face value. And the idea that how moral an action is depends on the well-being it creates also makes sense on the surface, since most people will agree that causing suffering is not a moral act. However, once you try doing what ancaps have done with the NAP, and follow the principles of utilitarianism to their logical conclusion in a variety of scenarios, you start to run into problems.
The most obvious one is what Red here demonstrates pretty well: most people are in agreement with the idea that saving a hundred is "better" than saving ten. A large number of people, although not quite as many as before, will also agree that, given the choice, killing ten is "less worse" than killing a hundred. However, outside of simple, quantifiable binary models such as these, the correlation between utilitarianism and conventional ideas on morality starts to break down. Exterminating the homeless to make cities cleaner, less prone to crime, and more pleasant to the eye is generally seen as immoral. But utilitarianism claims that it would be perfectly okay to do this if the arithmetic balances, and the utility lost by the dead minority of homeless people is less than the utility gained by everyone else in the city. And so, in order to make their theory reflect reality (and thus continue to be a functional theory that accurately explains present actions and predicts future ones), you need to add some kind of caveat. Perhaps that instigating a loss of utility (we could call that, oh, I don't know, AGGRESSION or something like that) against an innocent person isn't morally admissible.
Then there's the other problem with the utilitarian proposal: "utility" is a subjective, ill-defined concept, yet the theory calls for us to not just quantify it but to do arithmetic with it. We must take this ever-varying, arbitrary notion, and attach a constant, numerical significance to it. Is the utility of giving a thousand children lollipops more or less than the anguish of a single child losing his beloved pet? What about ten thousand children? A million?
Or, let's say I was brutally torturing one person on live video, but I was streaming it to perverts that were taking immense sexual pleasure from watching this. Let's also assume that it's somehow confirmed that the utility of a /d/egenerate chatroom orgasming was greater than the utility lost by the poor soul being tortured. What if there's a power outage and, unbeknownst to me, the live feed gets cut? Is the exact same action, with the exact same intent, now the product of a sadistic barbarian rather than a selfless entertainer giving pleasure to an audience? Do we need to add yet another caveat to the core tenet about the actor's intent? In that case, is telling someone that they've been unknowingly committing atrocities immoral, because that knowledge greatly decreased that person's utility?
Because there are all these holes and contradictions in the base theory of utilitarianism, it's rather easy to attack it and discredit it. You can try and fix this by adding caveats, exceptions in specific circumstances, et cetera. But even if you manage to catch every little problem that can come up, what you're left with isn't utilitarianism at all. It's an unsightly patchwork of philosophical legalese, undoing or limiting so many of the original theory's key aspects that it can no longer be said to have a voice in its own framework. Instead, all you have is a bizarre, ineffective conglomeration of intuitionism and a half-baked form of the NAP. Utilitarianism has lost its simplicity, its elegance, its internal consistency, and with all of that its efficacy as a theory.