---
title: Meta-Meta-Morality
date: 1970-01-01
tags:
techne: :wip
episteme: :speculation
slug: ?p=934
---

> Ich stampfe durch den Dreck bedeutender Metaphern,
> Meta, Meta, Meta, Meta für Meter...
> (roughly: "I stomp through the muck of meaningful metaphors, meta, meta, meta, meta by the meter...")
>
> -- Die Interimsliebenden, Einstürzende Neubauten

Let's introduce some New Terminology. (Because you're not a real crackpot until you have your own lingo.)

Morality is the question "What should I do in this particular situation?". This is different from questions like "What do I want to do?" or "What do I know how to do?".

Think of morality as a mathematical function, "moral :: (Situation, Action) -> Bool". It takes a given situation and a proposed action and tells you whether the action is moral, i.e. whether you should do it. The purpose of moral philosophy is to identify this function.
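
For concreteness, here is a minimal Haskell sketch of that signature. The situations, actions and verdicts are toy examples made up for illustration; the particular judgements are only there to make the code run, not claims about what the correct function looks like.

```haskell
-- Toy types; real situations and actions are obviously richer than this.
data Situation = TrolleyProblem | Dinner deriving (Show, Eq)
data Action    = PushFatMan | EatCow | DoNothing deriving (Show, Eq)

-- The object-level function described above: given a situation and a
-- proposed action, say whether you should do it.
moral :: (Situation, Action) -> Bool
moral (TrolleyProblem, PushFatMan) = False  -- one possible verdict, for illustration
moral (Dinner, EatCow)             = False
moral (_, DoNothing)               = True
moral _                            = False

main :: IO ()
main = print (moral (TrolleyProblem, PushFatMan))  -- prints False
```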

Faced with an unknown situation, like the Trolley Problem, we need to figure out what action to take. Essentially, we are faced with the set of all possible actions and need to identify the moral ones. Let's call this search Morality, with a capital M, or M1 for short.

Unfortunately, we sometimes get stuck, failing to find moral actions or remaining uncertain about the moral value of a proposed action. Instead of trying to solve this problem directly, we use a clever trick and attempt to solve a different problem, one that, once solved, will shed light on Morality. Instead of looking for actions, we now look for a method to find actions. We ask ourselves, abstractly: what criterion should I use to identify correct actions? Let's call this search Meta-Morality, or M2 for short.

To clarify the difference between M1 and M2, let's look at the different outputs they might produce. M1 deals with things like "push this fat man", "eat this cow" or "pray to this god". M2 goes up a level and gives you rules like "eat no animals", "all lives are of equal value" or "always speak the truth".

Of course, we might get stuck on M2 as well, and we can repeat the trick by moving on to Meta-Meta-Morality, or M3. M3 is conventionally known as meta-ethics, as it deals with systems of rule-selection, like "consequentialism", "deontology" or "divine command theory".
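
The whole tower can be sketched as types, too. In this toy framing (made up here, not anything standard), an M2 rule is something that induces object-level judgements, and an M3 theory is a way of selecting among candidate rules:

```haskell
-- Strings stand in for real situations and actions to keep the sketch short.
type Situation = String
type Action    = String

-- M1 output: an object-level judgement about a concrete (situation, action) pair.
type Judgement = (Situation, Action) -> Bool

-- M2 output: a rule like "eat no animals", which induces such judgements.
type Rule = Judgement

-- M3 output: a system of rule-selection, e.g. "consequentialism",
-- modelled here crudely as choosing one rule out of a set of candidates.
type RuleSelector = [Rule] -> Rule

-- Illustrative M2 rule.
eatNoAnimals :: Rule
eatNoAnimals (_, action) = action /= "eat this cow"

-- Illustrative M3 selector: trivially take the first candidate.
-- A real M3 theory would rank rules by some substantive criterion.
firstRule :: RuleSelector
firstRule = head

main :: IO ()
main = print (firstRule [eatNoAnimals] ("dinner", "eat this cow"))  -- prints False
```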

Most theoretical moral philosophy is done in M3, while practical moral philosophy (aka sila) tackles the problem of how to implement M2.

Unfortunately again, we can get stuck in M3.

And now comes an important insight, one that, I say in full crackpot hubris, nearly all of moral philosophy gets wrong.

You cannot justify a level with a lower level. The Arrow of Justification always points downwards.

The major failure of meta-ethics is the attempt to justify M3 theories through their M1 or M2 implications.

"We can't accept consequentialism because then we might end up pushing fat men in front of trolleys and that's horrible!" is simply invalid as an argument. M3 ("judge actions by their outcomes") can't be disproven by M2 ("it is wrong to push men in front of trolleys").

If you have a conflict on M3, you must make an argument on at least M4.
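
To illustrate what it means for the Arrow of Justification to only point downwards, here is a toy type-level sketch (the encoding and all names are made up for illustration): a justification takes a level-(n+1) theory and yields support for level n, and the reverse direction simply does not type-check.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Meta-levels as type-level naturals: M1 is 'Z, M2 is 'S 'Z, M3 is 'S ('S 'Z).
data Nat = Z | S Nat

-- A "theory" tagged with its meta-level; the content is just a label here.
data Theory (n :: Nat) where
  Theory :: String -> Theory n

type M2 = Theory ('S 'Z)
type M3 = Theory ('S ('S 'Z))

-- Justification only flows from level n+1 down to level n.
justify :: Theory ('S n) -> Theory n
justify (Theory name) = Theory ("justified by " ++ name)

-- Fine: an M2 rule justified by an M3 theory.
rule :: M2
rule = justify (Theory "consequentialism" :: M3)

-- Rejected by the type checker if uncommented: an M3 theory "justified" by an M2 rule.
-- broken :: M3
-- broken = justify (Theory "it is wrong to push men in front of trolleys" :: M2)

main :: IO ()
main = case rule of Theory s -> putStrLn s
```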

Of course, to ensure the correctness of your justifications, you must keep the levels separated. Here's an example of failing to do so, committed by Eliezer, no less:

> And if that disturbs you, if it seems to smack of relativism just remember, your universalizing instinct, the appeal of objectivity, and your distrust of the state of human brains as an argument for anything, are also all implemented in your brain. If you're going to care about whether morals are universally persuasive, you may as well care about people being happy; a paperclip maximizer is moved by neither argument.
>
> [...]
>
> In thinking that a universal morality is more likely to be "correct", and that the unlikeliness of an alien species having a sense of humor suggests that humor is "incorrect", you're appealing to human intuitions of universalizability and moral realism. If you admit those intuitions - not directly as object-level moral propositions, but as part of the invisible framework used to judge between moral propositions - you may as well also admit intuitions like "if a moral proposition makes people happier when followed, that is a point in its favor" into the invisible framework as well. In fact, you may as well admit laughter. I see no basis for rejecting laughter and accepting universalizability.

The mistake is that "accept (near-)universal values" and "laughter" are at different levels, with the first being M3 and the second M2. They are not comparable and have different standards of justification. As such, an M3 concern always overrides an M2 concern, and so universalizability is genuinely stronger than laughter.

(Eliezer is of course right about "universalizability" itself being a human value, in the sense that not all possible minds might be convinced by it. This claim rests on moral externalism, which I'm beginning to have doubts about, but this is beside the meta point.)

And it seems like there are two camps, one of them defending M1 ideas as fundamental.

The basic idea of that camp seems to be: "I am here with a wild mixture of beliefs on many meta levels, many of them M1 and M2, but some M3 and upwards. Unfortunately, and mostly due to the blind selection process that created this mess, not all of these beliefs are transparent to me, or consistent under reflection, or non-contradictory. The goal of moral philosophy is to apply already existing high-meta (M3+) methods to transform the existing beliefs until they are consistent under reflection, and so on."

Essentially, this treats moral beliefs as a flawed (pseudo-)formal system, and moral philosophy as the application of existing rules until no more flaws exist.

If you're sufficiently sinful, then you're screwed. If no transformation away from sin exists that conforms with your already existing rules, then you can't take this path. Moral leaps of faith are impossible.
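
To make that picture concrete, here is a toy sketch, with made-up belief labels and transformation rules, of this approach as a search over belief states: apply the already-accepted transformations in every order and check whether any reachable state is free of flaws. If none is, the search simply fails, which is the "you're screwed" case.

```haskell
import qualified Data.Set as Set

-- A belief state is just a set of labels in this sketch.
type Beliefs = Set.Set String

-- The already-existing, already-accepted methods of transformation.
transformations :: [Beliefs -> Beliefs]
transformations = [Set.delete "contradiction", Set.insert "reflection"]

-- Stand-in for "consistent under reflection, non-contradictory, etc.".
flawless :: Beliefs -> Bool
flawless b = not ("sin" `Set.member` b)

-- Breadth-first search over all belief states reachable via the allowed
-- transformations; True iff some flawless state can be reached.
reachFlawless :: Beliefs -> Bool
reachFlawless start = go [start] Set.empty
  where
    go [] _ = False
    go (b:rest) seen
      | flawless b          = True
      | b `Set.member` seen = go rest seen
      | otherwise           = go (rest ++ map ($ b) transformations)
                                 (Set.insert b seen)

main :: IO ()
main = print (reachFlawless (Set.fromList ["sin", "contradiction"]))
-- prints False: no allowed transformation removes "sin", so no leap is possible.
```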

I find it hard to make an analytical argument against this. But it disgusts me to the core.

There are two kinds of people: those who trust in Substance and those who trust in Meta.

As all my views on morality are motivated by M4 and upwards, you can guess which camp I belong to.

The whole procedure is inherently incremental. You work on meta-levels to bring clarity to lower levels, make your restrictions more precise to exclude more candidates, and so on. If, however, you have already excluded enough possibilities, then no further meta-work is needed. (It is also not necessarily fractal. At some point, all meta-levels might be sufficient and you're done, forever.)

Meta-emotivism.

A simple criterion I've started to use is locality.

Another is the rejection of moral luck.

(Incidentally, the song Die Interimsliebenden is something I'd love to talk about, but just can't 'cause it's not in English and I utterly fail at producing even a barely adequate translation. I have a draft about this futility, and it might be related to inter-subjective value comparisons, but alas...)