mirror of https://github.com/fmap/muflax65ngodyewp.onion (synced 2024-07-05 11:20:42 +02:00)
---
title: Meta > Intuitions
date: 2012-05-02
techne: :wip
episteme: :believed
---

This post isn't entirely serious, but it serves an important purpose. Hopefully, it will make a case for using meta-arguments and a priori reasoning, and against intuitions and contingent data, at least in some situations.
It should convince you just enough that there's something to it: that the "let's look at people's brains" school of metaethics is misguided in some ways. Nothing more, nothing less.
It starts with a [clever tweet][tweet truth] by Will Newsome:
> Preference utilitarianism is like aggregating everyone's beliefs and calling the aggregate Truth. That's not how justification works.
I'll add some less clever corollaries:
- Asking people thought experiments and calling a harmonization of their answers "morality" is like asking students math problems and calling the aggregate "calculus". That's not how thought experiments work.
- Looking at brains to separate values from biases is like looking at machine code to separate features from bugs. That's not how intent works.
- Bounded utility is like saying that calculus only works for numbers with up to 7 digits. That's not how universal laws work.
- Moral relativism is like arguing that the fact that not all cultures wear green hats is evidence that some wear no *clothes*. That's not how attractors work.
- Having priorities in your values is like saying that multiplication is more important than addition. That's not how orthogonality works.