updated a lot of epistemic states, disowned some pages, some minor cleanup

master
muflax 2012-06-22 05:37:42 +02:00
parent e12884d3af
commit 2ec1af59fc
80 changed files with 84 additions and 1484 deletions

View File

@ -10,25 +10,18 @@ non_cognitive: true
Contact
=======
To send me comments, angry rants, marriage proposals or anything else, you can reach me via...
- mail: *mail at muflax dot com*
- Jabber: *muflax at tuxed dot org*
- AIM: *muflax*
- [Google+]
- [Twitter]
- [Google+][]
- [Twitter][]
I welcome (anonymous) feedback at [whatiswrongwith.me][]. Especially negative feedback. (Seriously. I love criticism or rants. [Don't hold back][Crocker's Rules].)
Feel free to contact me in English or German. You may also try Japanese or any Romance language, but I can't guarantee that I will understand you.
You can also use my [GPG Key][] (md5 hash: a499 2cbb 4a5d 48dd 4188 f7e4 9cfc 3a3d), if you want.
All content is under a [Creative Commons][] Attribution Noncommercial Share Alike 3.0 license. You can do with it whatever the fuck you want, as long as you don't sell it or make it unfree. You can also get the [Source][], if you want.

View File

@ -1,53 +0,0 @@
---
title: An Acausal App
date: 2012-04-15
tags: []
techne: :wip
episteme: :speculation
---
I've been practicing acausal magic for a while now. In fact, I've been juggling so many spells lately that I'm having trouble remembering them all. So I wrote an app.
(Intellectual hipsters, beware! Algorithmancy might be going mainstream soon, so you better get in on this now before everyone's doing it. The Chinese [are already moving in on the market](http://www.foxnews.com/world/2012/03/13/girls-in-china-commit-suicide-dreaming-time-travel/). And you know there'll be Asian time-travelers everywhen once their parents decide it's important.)
Let's get started with the motivation.
So I'm in my [favorite supermarket](http://www.globus.de/) and I wonder when I'm allowed to eat pizza again. I can't properly digest grains and get stomach cramps and other nasty stuff, but I love pizza, so I have a kind of deal with myself where I only eat pizza once a month or so. Of course I forgot when I'm allowed to eat again. Strike 1.
Now I'm considering ice-cream. I want to lose weight, but then it's ice-cream. So I consider a trade-off: I know I won't feel bad about it once I'm home, but saying no right now sucks. Is a week of slight craving for ice-cream worth the self-approval of sticking to a decent diet? I'm not sure. Strike 2.
I decide against the ice-cream and want to buy some chicken. I know that I have a bad habit of forgetting about food and so buying anything that might go bad is a big risk. This chicken only lasts a few days. Bad. If only I could make a contract with myself in 3 days that I'll buy the chicken if and only if I'll eat it then. Strike 3 and you're out.
Time to solve these kinds of problems.
These problems are fundamentally all game-theoretical trades; the catch is that the agents involved in them are temporally separated. Beeminder is already an awesome way to cooperate in such situations. The main problem is that you can't spontaneously make a contract there, nor does it support some of the unique trade-offs I described.
Enter Acausal Trade, a new way to arrange a trade in the multiverse. (Ensuring the consent of all participants is left as an exercise to the user.)
How does it work?
Remember the pizza - I will enjoy it now, but feel slightly sick for 2 or 3 days afterwards. So I make a contract for the next 3 days, thus involving muflax(0) (today) up to muflax(3). Each of us states the expected level of enjoyment they would get out of the contract. In this case, it would look like this:
[]
Scores go from -5 (horrible) to +5 (awesome). Every participant has to personally agree to the deal. (There is no "yes to all". This is completely intentional.) When agreeing, channel the participant (see your guide to the multiverse on how to do that) and let them agree or disagree. Adjust the deal if necessary until everyone consents. Done.
The app will send a message to every participant. In this case, you'll get a notification once a day for the next 3 days. The message has several purposes. Most obviously, it's a reminder. More importantly (you could just use a straightforward todo app for reminders), it gives every participant an opportunity to revise the contract.
muflax(1) may have been channeled wrong. muflax(0) might think a pizza aftermath feels like a -1, but muflax(1) actually thinks it's a -3 and re-adjusts the score accordingly. muflax(0) totally forgot how bad the cramps can get. (muflax(1) can change the score at any time during their day.)
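The mechanics just described (per-day selves, a -5 to +5 score each, individual consent, later score revision) can be sketched in a few lines. This is purely an illustrative model of mine; the actual AcausalTrade app's classes and names are assumptions, not its real API:

```python
# Hypothetical sketch of the contract model described above. All names
# (Participant, Contract, revise, ...) are my inventions, not the app's.

class Participant:
    def __init__(self, name, day, expected_score):
        self.name = name                 # e.g. "muflax(1)"
        self.day = day                   # days from now
        self.score = expected_score      # -5 (horrible) .. +5 (awesome)
        self.consented = False

    def revise(self, new_score):
        # A participant may re-adjust their score on their own day,
        # e.g. muflax(1) changing a -1 to a -3.
        if not -5 <= new_score <= 5:
            raise ValueError("scores run from -5 to +5")
        self.score = new_score

class Contract:
    def __init__(self, description, participants):
        self.description = description
        self.participants = participants
        self.revoked = False

    def agree(self):
        # No "yes to all": every participant consents individually.
        for p in self.participants:
            p.consented = True           # stand-in for channeling each one

    def revoke(self):
        # Revoking consent ends the contract.
        self.revoked = True

    @property
    def active(self):
        return not self.revoked and all(p.consented for p in self.participants)

pizza = Contract("eat pizza today", [
    Participant("muflax(0)", 0, +4),     # enjoys the pizza now
    Participant("muflax(1)", 1, -1),     # mild aftermath
    Participant("muflax(2)", 2, -1),
    Participant("muflax(3)", 3, 0),
])
pizza.agree()
print(pizza.active)  # → True
```

Revoking consent (`pizza.revoke()`) ends the contract, matching the rule above; a new one can then be arranged.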
There are several advantages to that. First, your ability to predict scores should get better. My (informal) predictions have improved since I started using [PredictionBook](http://predictionbook.com/), so I expect it to generalize. This makes later trades more honest. I suspect that underestimating the negative effects of productivity contracts is a major reason they fail so frequently. Future-you isn't sabotaging you out of spite, you know.
Second, it gives other participants a better way to state their consent. I strongly suspect that respecting consent is a crucial feature of morality, and I don't have a perfect [track record](http://blog.muflax.com/2012/02/03/being-immoral/) when it comes to trades with future-me. Being able to revoke consent at a later time makes this explicit and should help increase past-me's luminosity. It's harder to be evil when you are aware of the damage you're doing. (Revoking consent ends the contract. You might try to arrange a new one if you want.)
Finally, well, how do *you* arrange contracts with the future? If I'm involved in a trade, I want to know about it. I can't walk up to someone's house, pretend I'm buying their bike for 5 bucks, put the money in the mail box and take the bike with me. I actually have to talk to them, you know. So how can you say you're trading with future instances of yourself when you don't contact these future instances? It's a bullshit rationalization, nothing more. So every participant gets a message and has to consent twice - once when entering the contract, once as soon as they find themselves in possession of the phone.
Another feature is that it keeps track of time imbalances. If you constantly arrange contracts that are bad for future-you, then you might lose their support. Try to arrange some deals that benefit them as well! (There is one major problem with that, though. Contracts with future participants are neat, but what about past participants? That'd be really cool! But how do you get their consent? They don't get access to the phone anymore. I'm still thinking about a solution to that problem.)
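The imbalance tracking could be as simple as summing each participant's expected scores across all contracts; a hypothetical sketch (the real app's bookkeeping may differ):

```python
# Illustrative sketch of the time-imbalance bookkeeping mentioned above;
# names and data layout are my assumptions, not the app's.

def imbalance(contracts):
    """Net expected score per participant across all contracts.

    Consistently negative totals for future selves suggest you are
    arranging deals at their expense."""
    totals = {}
    for scores in contracts:                # one {participant: score} dict each
        for who, score in scores.items():
            totals[who] = totals.get(who, 0) + score
    return totals

ledger = [
    {"muflax(0)": +4, "muflax(1)": -3},     # pizza: fun now, cramps tomorrow
    {"muflax(0)": +2, "muflax(1)": -1},     # ice-cream deal
]
print(imbalance(ledger))  # → {'muflax(0)': 6, 'muflax(1)': -4}
```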
All supported contracts and the interface in general are still completely in flux, but it's already usable. (Arranged some contracts already.)
[Source](https://github.com/muflax/AcausalTrade) and [binary] are freely available. It runs on Android 2.3.6 because that's what's on my phone. I have no idea if it works on any other Android version. This is a totally experimental prototype. I've literally written it in the last 24 hours. It might eat your cat or decrease your measure. I'll play around with it for a while and if I still use it in a couple of weeks and have settled on an interface, I'll make a proper release to the Android market.
(Disclaimer: muflax [neither](http://en.wikipedia.org/wiki/Dialetheism) endorses nor denies algorithmic philosophy. Side-effects may include anxiety, pareto-inefficient trades, basilisk nightmares and unwarranted commitments to alien ontologies. Ask your metaphysician if trading with the future is right for you.)
(And if you're saying that this is just ad-hoc commitment contracts and the talk about acausal trade is just belief attire, well, then you're probably right, but hey, algorithmancy sounds so much better than self-help, right guys? Guys?)

View File

@ -1,12 +1,10 @@
---
title: Being Immoral
date: 2012-02-03
tags:
- deontology
- moralism
techne: :done
episteme: :speculation
slug: 2012/02/03/being-immoral/
disowned: true
---
During my Beeminder experiments, I noticed an odd mental state. A few times I deliberately *ignored* my plan and explicitly gave up. It feels like defecting against Future Me. It's unfortunately somewhat common that I think, "I *could* start this today and keep it up until the deadline, work maybe 1 hour a day, *or* I do nothing for a month, then work my ass off" and then refuse the first option.

View File

@ -1,15 +1,10 @@
---
title: Ontological Therapy
date: 2012-03-08
tags:
- algorithmic magic
- consciousness
- ontology
- #possibleworldproblems
- schizophrenic episodes
techne: :done
episteme: :emotional
slug: 2012/03/08/ontological-therapy/
disowned: true
---
*Warning: this is a crazy post. I'm not sugarcoating the insanity here. You might skip this one.*

View File

@ -1,12 +1,10 @@
---
title: Simplifying the Simulation Hypothesis
date: 2012-01-28
tags:
- sent from my dreams
- simulation
techne: :done
episteme: :speculation
slug: 2012/01/28/simplifying-the-simulation-hypothesis/
disowned: true
---
Just slightly too long for [Twitter][]: Everyone who has experimented with lucid dreaming knows that a computer the size of a coconut, primarily designed to climb trees, is enough to simulate worlds of sufficient detail to convince a mind that it is in a full world, containing many other minds it can communicate with.

View File

@ -1,20 +0,0 @@
---
title: ! 'Antinatalism ≠ Annihilation '
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---
Recently,
Antinatalism is not a fancy new term for annihilation.
Fundamentally, there are several arguments for antinatalism, and only one of those supports the annihilation of humanity, and even then only under very specific circumstances.
It would require that you have the technology to destroy all of humanity, but *not* to improve circumstances.
*That's a pretty small margin of error.* You better have a good argument for it.

View File

@ -1,9 +1,6 @@
---
title: ! 'Introducing: Antinatalist Antelope'
date: 2012-01-19
tags:
- antinatalism
- i do what i must because i can
techne: :done
episteme: :fiction
slug: 2012/01/19/introducing-antinatalist-antelope/

View File

@ -1,10 +1,6 @@
---
title: Sunk Cost Fallacy Assumes A-Theory of Time
date: 2012-02-15
tags:
- antinatalism
- b-theory
- suicide
techne: :done
episteme: :speculation
slug: 2012/02/15/sunk-cost-fallacy-assumes-a-theory-of-time/

View File

@ -2,12 +2,8 @@
title: The Asymmetry, an Evolutionary Explanation
alt_titles: [Asymmetry Evolutionary]
date: 2012-01-28
tags:
- antinatalism
- asymmetry
- evolution
techne: :done
episteme: :speculation
episteme: :discredited
slug: 2012/01/28/the-asymmetry-an-evolutionary-explanation/
---

View File

@ -1,66 +0,0 @@
---
title: Those Things You Don't Have
alt_titles: [Anattas, No-Selves]
date: 2012-04-19
techne: :wip
episteme: :speculation
---
No-self arguments are fairly common these days. Buddhists, eliminative materialists, and new-age hippies alike argue that "you" don't exist.
Problem is, there are a lot of different positions out there, each subtly different about what the self is, and to improve future discourse, I wanted a way to make those arguments less ambiguous.
I also wanted to understand how "no-self" and "rebirth" can exist in the same belief-system that is Buddhism. I think my guess is a pretty good reconstruction of a not unreasonable position.
Generally speaking, I'm merely *presenting* positions here, not *endorsing* them. I think there's some useful insight or intuition behind all of them. Neither am I giving any decent arguments for or against any of them, merely quick overviews. The point is not to explore them, but to show ways in which the word "self" can refer to very different things, and that it's important to be clear about what level one is thinking about.
So here's a catalogue of all[^all] the things that may be called a "self", and ways in which one might not have it.
[^all]: All positions that I could think of, that is. The only missing thingy might be a kind of essentialist soul, but I don't understand what that really means, beyond the concepts already described.
# Psychology
Construction from psychological traits. "I am a
That way, someone on a mood stabilizer might not "be themselves". They are still the same human, have the same name, but their emotional and psychological states have changed, and they act in atypical ways.
These attributes, however, are tied to a specific person. If someone is strongly defined by their narcissism, for example, then that doesn't mean that they identify with *all* narcissism out there in the world. Therefore, this view overlaps somewhat with the Value and Narrative category.
# Values
Values, i.e. preferences about the world. This way, you'd think of the self as a specific optimization pressure on the world. Alternatively, you might look at decision theory and identify with certain decision algorithms, and basically say that a complex set of things like "choosing vanilla over strawberry ice cream" is the self.
One rejection might try to argue that there is no coherent or consistent set of values in humans. Our preferences could be largely random or contradictory, for example.
From the value perspective, there is no inherent reason to limit a self to one specific human. The same value might easily be shared by large groups, or even very un-person-like processes like evolution itself, and one might choose a more inclusive identity. In that sense, one might say that there is no simple "individual" self living in only one brain, but a complex and changing self, one that might not even be alive in the usual way.
# Narrative
Nihilism is a common criticism, the rejection of any meaning in the world, including in stories.
# Sensations
There are basically four different conceptions of how sensations relate to the self. And because images improve everything, here's a quick drawing made beautiful through the magic that is Instagram.
<%= image("anatta.jpg", "Anatta Sensations") %>
The geometric shapes are sensations, the eye-like thingies are observers, and time flows from left to right.
The first view is the Cartesian Theatre (sequence 1). Basically, there is a continuous observer or fundamentally connected series of person-moments, and as a separate kind of thing, there are connected sensations which are observed.
Conceptually, this is a fairly intuitive view. "You" are like an empty theatre, something persistent that can become aware of various things over time.
One way to reject this view is to take away the connectedness (sequence 2). Things are still observed, but there is no flow of time. This is straight-forward [B-Theory][] of time. This idea may arise as a consequence of taking parallel worlds seriously (e.g. through the [Many-Worlds Interpretation][MWI]) or by thinking of time as [concrete][], not continuous. Additionally, many meditation techniques, especially [vipassana][], lead to flickering experiences during which reality seems fundamentally disconnected.
In that sense, there is no "you" because "you" is not continuous and any extended self is an additional construction.
An additional criticism would be to reject a separate observer (sequence 3). There is no thing-which-observes sensations, they are in a sense self-observed, they just happen. One common intuition behind this idea is to simply try to "find" an observer, but failing to do so. Wherever you look, there will be sensations[^neither], but you won't observe an observer, the argument goes.
[^neither]: Although it is problematic to say that there are always sensations in consciousness. Highly [jhanic][Jhana] states, especially the appropriately named Neither Perception Nor Yet Non-Perception, don't seem to have any sensate experience any more, no objects at all, yet are still conscious.
In this view, it is difficult to see how there can be any coherent persons. Sensations are not inherently connected, they share no content, no observer, so at best, persons are narrative devices, agreed-upon constructs, but not in any way fundamental. From a sensate level, this is a strong eliminative no-self position.
However, if one brings back the connectedness of sensations in some non-observer way (sequence 4), one arrives at what I believe is the traditional Buddhist view. There is still an objective flow of time, but nothing beyond the sensations themselves. In that sense, one can have a conception of rebirth (by expanding this connection between human lives) without having to assert some shared psychological traits. Past lives were you (from an experiential level) and not you (from a psychological, narrative level). Or without the mystical mumbo-jumbo, one might simply assert the [A-Theory][] of time and identify sensations with brain-states, for example, so that they don't need an additional observer or location to happen in.

View File

@ -1,93 +0,0 @@
---
title: Cellular P-Zombies
date: 1970-01-01
tags:
- cellular automatons
- materialism
- non-dualism
techne: :wip
episteme: :speculation
---
Cellular automaton implies no green, but green, therefore no materialism. [Checkmate][checkmate], materialists!
Ok, that was the short version, now the long version.
The argument is fundamentally very similar to Chalmers' P-Zombie argument. (Wait! I swear I'm not peddling dualism here. I'm not even trolling. (Ok, maybe a little bit.)) However, it has the advantage of being a really simple setup that you can grasp *visually*. That's neat. But it's also the exact argument in my head that convinced me to be skeptical of materialism, and for some reason, people rarely argue using cellular automatons. They are so neat. I only remember one case of Dennett using them in Freedom Evolves. Let's give 'em a second chance. (The general argument is not at all new, but the presentation in terms of cellular automatons might be.)
# GoL
# reductionism vs. HashLife
You can build some interesting patterns in GoL:
<%= image("Gospers_glider_gun.gif", "Glider Gun") %>
But wait, there's more! GoL is actually turing-complete. You can run any kind of computation on the board. Here's an actual Turing machine implementation:
<%= image("Turing_Machine_in_Golly.png", "TM") %>
Don't underestimate these little automatons. They are seriously powerful. Anything your PC can do, they can do. (Well, not always fast, but sure.) They have some cool specific uses in biology, cryptography and so on. They aren't just pretty toys.
So if the materialists are right, then minds are to be identified with brain-states, typically computations done by neurons. This implies that *any* turing-complete machine can run a mind, and this mind would be indistinguishable from ours. (Except maybe with regards to performance or resource requirements.)
This makes cellular automatons interesting for philosophical arguments. Fundamentally, there's nothing in GoL that plain materialism can't deal with. In fact, it's so similar to the materialistic ontology that I will soon use it as a substitute to argue *against* materialism. But before we get there, let's have a closer look at the metaphysics.
What's the ontology in GoL? In less fancy terms, what kind of stuff and ways to change stuff do we have?
We have a discrete, infinite, 2-dimensional space. Each point in that space has 2 possible states - on or off. We have a discrete, infinite time - a simple counter, really. (We could also look at finite versions, both temporally and spatially. Finite boards are, of course, equivalent to infinite boards that are mostly empty.)
What neat properties can we see? Well, it's all deterministic. No probabilities involved at all. Furthermore, there are no hidden variables or unique histories. You can just take the board at any point in time you want and start running it from there. You don't need to know anything about its past at all. The computations are all nicely local, both in space and time.
There are also no individual particles. In fact, there are no particles at all. (You could say there are *relations between particles*, without there being any actual particles.) You only have a space that consists of points that have possible states. That's all. There is no "on particle" traveling over the board. It might look like that, as patterns get propagated, but that's only an additional abstraction you might use to understand what's going on. The board has no movement, only state changes. ([Zeno made that point a long time ago.][Arrow Paradox])
Furthermore, one could eliminate time by thinking of the possible states of the whole board as arranged in a directed graph, like so:
![]()
If you think about it that way, then there isn't an objective time and no privileged board. There are just ((in)finitely many) board configurations, and they are causally linked, as determined by the transition rule we decided on. So you can look at any board and decide, if I apply this rule, which boards can I reach, and which boards can reach me? (Why would you prefer a timeless setup over a timed one? Because it's algorithmically simpler. You don't have to specify "these are the rules, and only *this* board and its descendants exist", you just say "all boards exist". The downside is, you now have all boards flying around and they require many more resources. But the rules are simpler. It's a trade-off. For our purposes, both approaches are fine. This argument works either way.)
Are we missing anything? No. I can totally run this now. This is literally all I need to know to write a program that runs the Game of Life. I could also run it using a Go board or [rocks in the desert][xkcd rocks]. Causally speaking, we're *done*.
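That claim is checkable: here is a minimal sparse-board sketch of the standard Conway rule (a cell is on next tick if it has exactly 3 live neighbors, or is already on with exactly 2). The code is my own sketch, not taken from the post:

```python
# Minimal Game of Life step on an "infinite" board, represented sparsely
# as the set of live cells. This really is the entire ontology described
# above: points in a discrete 2D space, two states, one local rule.

from collections import Counter

def step(live):
    """One tick of the standard rule (born with 3, survives with 2 or 3)."""
    # Count, for every cell adjacent to a live cell, how many live
    # neighbors it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A blinker oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```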
Now where's the conscious awareness?
This question might sound a bit inane. It does sound a bit cranky, like someone pointing at a radio and the electronics within it, and asking, "Where's the music? Can you show me the music?". But really, think about it. Where's the conscious awareness?
If there are mental phenomena, they must exist *somewhere* in the ontology. So what candidates do we have?
There's the cells. Personally, I think that's the most natural place to look. But each cell is only connected to 8 neighboring cells. Not more. That's it. They are entirely local. So even if there are mental phenomena involved in cells, there could only be a tiny amount of them. (At most 512, in fact.) So this doesn't get us large-scale phenomena like "green apple".
Maybe it's in time? Well, time is not fundamental. Time is itself just an artifact of the way we phrased our transition rule. Not a good candidate.
It could be an aspect of the rules. But the rules are extremely simple. "On with exactly 3 neighbors, or staying on with exactly 2; else, off." is all there is. You might include the initial board configuration in the rules, but "initial" isn't all that meaningful in the timeless formulation. And materialists generally believe that conscious minds evolved from non-conscious matter, so at some point mental states would have to emerge. They can't be there in the rules from the beginning. This doesn't work either.
We have one last remaining thing - all of state-space. The whole board could have mental states. Certainly a plausible guess. But then, wouldn't you expect mental phenomena to always be global? And unless you are the solipsist, you probably think there is more than one mind in the universe. So that's not good either.
It's as if minds would be constrained to a certain subset of cells, a certain section on the board. But where do these borders come from? They are not in the rules. The cells don't know them. Where are they coming from? There would have to be a *separate* set of rules, *additional* to everything we know, that determine what states are mental and what aren't. That's property dualism. (Chalmers defends it. Many physicalists are property dualists in denial. I'm not particularly fond of it, personally. I don't like [dualisms][Non-Dualism].)
Or you simply deny mental states. It's the obvious implication, really. If you didn't know that consciousness existed, if you were some computer scientist from a P-Zombie universe without mental phenomena, would you ever suspect any? Probably not. And just as naturally, why not dismiss all this talk about "experience" as confused. Take a thorough third-person perspective and get rid of consciousness. (Dennett seems to try this, though I can't make sense of half the stuff he says.)
There's one last possibility. You might say that the mental states are in the *computation*. It's not the actual machine that matters, it's the causal entanglement in the software that runs on it. But if you take this view, then what do you need the machine for? You really don't. You don't need instances, don't need worlds at all. You just need raw math, just dependencies. It's all there in the decision theory. [And as much sympathy as I have for this position][Ontological Therapy], that's still no physicalism, certainly no materialism. It's algorithmic idealism.
Here's another way to look at it. Imagine an infinite board filled with a properly random arrangement of cells. Any sub-pattern you can think of occurs *somewhere* on the board. If (non-eliminative) materialism is right, we should be able to do the following:
We pick a specific location and zoom in. In this snapshot, there is no conscious mind.
<%= image("gol_1.png", "1") %>
But then as we zoom out more (and this is slightly misleading because we would have to zoom out *a lot*), eventually we would observe a conscious mind.
<%= image("gol_2.png", "2") %>
And as we zoom out *even more*, other minds would appear, separate from the first one.
<%= image("gol_3.png", "3") %>
What property *in the cellular automaton* do we use to draw these boundaries? Is there any reason to say *these* boundaries are conscious, but if we shift them all one cell to the left, they aren't? Excuse me, but I'm invoking the argument from incredulity here.
Now *if* there were a way to connect certain cells, if they shared a common state, were in some way entangled, then this claim would seem plausible. There would be some internal information we could use to pick out patterns without imposing our own (arbitrary) interpretation on the board. But there is no such shared state in a Turing machine. Sucks, bro.
And if you haven't been screaming "But muflax, you overlooked obvious feature X!" for a couple of paragraphs (and if so, please let me know), then I'm done. Case closed.
Abandon materialism all ye who experience green.

View File

@ -3,7 +3,7 @@ title: Crackpot Beliefs (The Theory)
alt_titles: [Crackpot Theory]
date: 2012-02-20
techne: :done
episteme: :believed
episteme: :broken
slug: 2012/02/20/crackpot-beliefs-the-theory/
---

View File

@ -1,103 +0,0 @@
---
title: Logical Fallacies Debunked
date: 2012-04-27
techne: :wip
episteme: :speculation
---
This isn't really a debunking. (I love senselessly hyperbolic titles.) These are actually just some (slightly) prettified notes I made while investigating a certain line of argument.
Rationalist communities are obsessed with logical fallacies and cognitive biases these days[^bias]. However, I'm increasingly seeing the meta-contrarian position that the relevant literature is actually full of bullshit. The two main arguments, as far as I can tell, seem to be that logical fallacies, while often technically correct, don't apply to real-world scenarios because they focus on very general or contrived cases instead of typical ones, and that bias research is actually methodologically weak and treated as much stronger than is justified.
I don't know anyone whose opinion on these matters I really trust, so I decided to check it out myself. There certainly were some cases where the meta-contrarian argument seemed reasonable, but I didn't know how representative it was. Just because two fallacies aren't actually fallacious doesn't mean that focusing on them fucks up your reasoning, and some r/atheist being an idiot doesn't mean skeptics as a whole are retarded. I started taking notes so I could come to a more complete(-ish) conclusion.
This post is about logical fallacies, cognitive biases will follow soon[^soon].
[^bias]: Which I still find utterly hilarious. RAW was talking about them in the 70s/80s, in a much more reasonable way too, calling out fundamentalist materialists as well (as he called them, though I dislike the label). The fact that the modern transhumanist/rationalist community is entirely pre-dated by batshit-insane mystics who would be downvoted to oblivion today is one of Eris' greatest jokes, I think.
[^soon]: Soon in Valve Time, of course.
Because I'm lazy, I'm basing this off the recent (and quite fancy) [Logical Fallacies Poster][] [^poster] (and because I don't want to grind through Wikipedia's List of Logical Fallacies). I'm (roughly) using Information is Beautiful's categories from the [Rhetological Fallacies][] overview.
[^poster]: I'd also like to draw attention to the hilarious double-standard and endorsement of dishonest signalling in the text at the bottom of the poster.
# Genetic Fallacy (Appeal to Authority / Tradition / Majority)
Annoys me the most.
I have my own pet political theory why invoking this argument is so important to skeptics (note that the appeal to authority, tradition and popular opinion are all versions of the genetic fallacy, which is therefore represented 4 times on the poster), but I'll leave it out for now. (\*cough\*Protestantism\*cough\*)
# Argument from Ignorance / Incredulity
# Argument from Consequences
If it were true, it would have disastrous consequences, therefore it can't be true.
# Naturalistic Fallacy
# Anecdotal Evidence
# Composition / Division
And as [The Mythical Man-Month][] observed, adding more programmers to a late project will only make it later.
# Gambler's Fallacy
This, I speculate straight out my ass, is actually caused by gamblers using a [Kolmogorov-ish predictor][Kolmogorov Complexity]. They recognize that patterns are unusual, and so a coin coming up heads 5 times in a row points to something weird going on. If you can rule out deliberate manipulation, then you should see "more random" results. Another heads would *not* be random. Appealing to a base-rate probability ("The probability of heads is always 1/2, so history doesn't matter.") is flawed for this reason.
The problem is therefore one of ambiguous language. The gambler isn't making a claim about randomness[^random] in the prior probabilities sense, but the information theory sense. And assuming an ordered universe, it really *is* more reasonable to use this assumption.
That doesn't mean this fallacy doesn't exist. Just read any discussion board about a game with random drops and you'll have plenty of examples. But simply dismissing it as "people don't think about base rates" misleads you. People have an intuitive grasp of complexity and predictability, not of frequencies.
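As a complementary toy illustration (mine, not the post's): the flat base-rate appeal silently assumes you are *certain* the coin is fair. Give even a small prior to a rigged coin and five heads in a row pushes the predicted probability of another heads above 1/2:

```python
# Toy Bayesian model, my own illustration: two hypotheses, a fair coin
# (p=0.5) and a rigged coin (p=rigged_bias). The priors and bias values
# are arbitrary, chosen only to make the effect visible.

def p_heads_after(n_heads, p_rigged_prior=0.05, rigged_bias=0.9):
    like_fair = 0.5 ** n_heads           # P(n heads | fair)
    like_rigged = rigged_bias ** n_heads  # P(n heads | rigged)
    post_rigged = (p_rigged_prior * like_rigged) / (
        p_rigged_prior * like_rigged + (1 - p_rigged_prior) * like_fair
    )
    # Posterior predictive probability of the next toss coming up heads.
    return post_rigged * rigged_bias + (1 - post_rigged) * 0.5

print(round(p_heads_after(0), 3))  # → 0.52, almost fair a priori
print(round(p_heads_after(5), 3))  # noticeably above 0.5
```

With `p_rigged_prior=0.0` the function returns exactly 0.5 regardless of history, which is the only case where "history doesn't matter" actually holds.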
# Slippery Slope
# Strawman
# Special Pleading
# False Dilemma / False Middle Ground
# Begging the Question
Perfectly valid criticism.
# Ad Hominem
Of course, given the importance of status, this is actually a perfectly valid line of argument in many real-world situations. Look, political debates don't *really* change the world, and whatever "side" wins doesn't actually matter, so ignoring the arguments and going for the character of the opponent is *more* appropriate. If you're after tribal alliances, then you damn well care about personal character.
# Correlation vs. Causation / Post Hoc Ergo Propter Hoc
# No True Scotsman
# Tu Quoque
# Appeal to Emotion
Emotions didn't spontaneously manifest out of nowhere. They reflect evolutionary optimizations. An Appeal to Emotion is therefore often an implicit Outside View argument.
"This food looks disgusting, it must be unhealthy" is perfectly valid when you understand disgust as being strongly entangled with pathogens.
# Burden of Proof
Bayesianism.
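To unpack that one-word answer: on a Bayesian view, "burden of proof" dissolves into priors and likelihood ratios — nobody wins by default, evidence just shifts the posterior. A minimal sketch, with entirely made-up numbers:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A claim starts at a 1% prior; we then see evidence that is 20x more
# likely if the claim is true than if it is false.
p = posterior(prior=0.01, p_e_given_h=0.8, p_e_given_not_h=0.04)
print(round(p, 3))  # → 0.168
```

The claim is still probably false, but the evidence did real work; demanding that one side carry the whole "burden" obscures that.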
Conclusion
==========
Following the skeptics, you'd quite often go horribly wrong. Saying "That's a logical fallacy!" and stopping there would make you miss an awful lot of things. A healthier alternative, it seems to me, would be to say "Your argument can't work as stated, so your statement - but not the argument or conclusion - is likely broken or ambiguous."
I fear that reading about logical fallacies itself screws you over. You are presented with a logical structure and an obvious flaw in it, like "If A, then B. B is horrible. Therefore not A.", and you then mistakenly start to believe that whenever you see this pattern, you simply see equally-incorrect ideas. It's similar to the Principle of Explosion. In Classical Logic, any contradiction implies any arbitrary statement equally. So once you notice a contradiction, you are done, nothing that follows can be trusted at all.
But that's not how the world works. Contradictions are actually localized and have relative strengths. When astronomers applied Newton's Laws, they quickly realized that Uranus' orbit didn't match the calculations. They didn't conclude from this contradiction that the Laws of Motion implied everything and were entirely worthless. Rather, they looked for localized explanations and discovered Neptune.
Similarly, it's entirely correct to say that *technically*, just because something is traditional doesn't mean it's a good idea. From the perspective of Classical Logic, that's a straightforward thing to say. But dismiss tradition, and before you know it, you're a Utopian Socialist. (And we remember how that worked out.) What you *should* have realized is that "X is traditional, therefore X" isn't what someone using an appeal to tradition is actually trying to say. They are making a claim about historical filters ("If it could work, why haven't we seen it work already?") and are taking an Outside View, and that should not be ignored.
You should not be thinking in terms of logical contradictions, but derivations from a predictive model. A "contradiction" isn't about exploding implications, but about additional complexity to your theory. If you notice that Uranus isn't where it's supposed to be[^uranus], that means that you would have to make the Laws of Motion more complex. Denying the "implication" that the laws are "false", and looking for an alternative set of conditions that would preserve the theory (like another planet) is *not* a rationalization, but a move in favor of low algorithmic complexity.
[^random]: Tee hee, I'm so random, my biography has Kolmogorov complexity equal to its length.
[^uranus]: Yo mamma so dumb, she thought rigid designators were the nicknames men give their penis.


@ -1,12 +1,8 @@
---
title: Some Thoughts on Bicameral Minds
date: 2012-01-04
tags:
- bicameral
- consciousness
- jaynes
techne: :done
episteme: :speculation
techne: :rough
episteme: :discredited
slug: 2012/01/04/some-thoughts-on-bicameral-minds/
---


@ -1,15 +1,10 @@
---
title: Why This World Might Be A Simulation
date: 2012-01-01
tags:
- deontology
- higher criticism
- lesswrong
- meditation
- wireheading
techne: :done
episteme: :fiction
slug: 2012/01/01/why-this-world-might-be-a-simulation/
disowned: true
---
> Many have undertaken to draw up an account of the things that have been fulfilled among us, just as they were handed down to us by those who from the first were eyewitnesses and servants of the word. With this in mind, since I myself have carefully investigated everything from the beginning, I too decided to write an orderly account for you, most excellent Theophilus, so that you may know the certainty of the things you have been taught. -- Luke 1:1-4


@ -1,10 +1,6 @@
---
title: A Course in Miracles - Jack and the Beanstalk
date: 2012-02-27
tags:
- meditation
- miracles
- monsters
techne: :done
episteme: :believed
slug: 2012/02/27/a-course-in-miracles-jack-and-the-beanstalk/


@ -1,14 +1,10 @@
---
title: Dark Stance Thinking, Demonstrated
date: 2012-01-30
tags:
- dark stance
- morality
- pirates
- tantra
techne: :done
episteme: :believed
slug: 2012/01/30/dark-stance-thinking-demonstrated/
disowned: true
---
As I [once noted][Dark Stance]:


@ -1,65 +0,0 @@
---
title: The Dukkha Core
date: 1970-01-01
tags: []
techne: :wip
episteme: :emotional
---
This is dukkha.
I can't make it go away. I can't narratize it. I can't take anything against it. I can't control it. I can't talk it away. I can't do a ritual about it. I can't dissect it.
It's just there. Always.
*I do not accept this.* This is what it says.
I thought it had features, had nuances. Would be *about* something. I thought it was "I do not accept this *because*...". It is not. It just denies.
There is nothing to understand. Nothing to do about it. As long as I exist, it exists. It is ruthless, single-minded, uncompromising. I like that about it. It doesn't care about my admiration. It still doesn't accept this.
I know that if I were not, I could escape it. This doesn't help me. So I discuss, try to understand, probe. It doesn't negotiate with me. It just doesn't accept this.
I surround myself with friends. I watch TV. I exercise. I study. I read. I take drugs. I eat. I retreat. I achieve goals. I give up goals. I have more stress. I relax. I think. I pray.
It doesn't react. It's going to hell and it's dragging me with it. There is nothing I can do.
Nothing would make it affirm this world.
I study what the ancients did. They didn't figure this one out either. They felt it too. They have talked about it in different ways, but this one appeals to me the most:
> God revealed to Jesus: "When I examine a man's heart and find in it no love for this world or for the next, I fill it with love of me and carefully guard it."
Following these thoughts, I can catch fleeting moments of meaning. I have done it before, many times. One day, about a year ago, something unique happened.
In Theravada Buddhism, they measure the progress of monks in how often they will have to return to life.
At first, you become a sotapanna, a stream-enterer. From now on, the automatic process of enlightenment will bring you to cessation. You will be reborn at most 7 more times. Then comes sakadagami, the once-returner. Anagami, non-returner. There will be no more human life. Finally arhat, the conqueror. You have broken free.
This is certainly optimistic. Once started, the process is inevitable. You can't fail anymore.
So about a year ago, I again broke down. Nothing was ever good enough. Finally I sat down, solemnly swearing that I would not get up, under any circumstances, until I reached fruition. I would sit in meditation until I reached a breakthrough or until I starved. I had lost any will to resist, had completely surrendered.
I demanded pain. If nothing is acceptable, then just give me pain. Show me clearly what suffering is. My muscles begin to hurt, I don't care. *This is not intense enough.* I can stand this, this can't be suffering. Shadows move and threaten me, fear comes up, I am not moved. I laugh at fear. How quaint, trying to press this button. Let me help you! I deliberately panic, enhance the fear, intensify it. It is confused, backs down.
I sit in silence. The posture becomes unmaintainable. It tries to get me to take a break. I refuse.
Now *I* don't accept this. Two can play this game. I want real suffering, not discomfort. I demand to see dukkha.
More and more attempts to make me suffer appear. All fail. Suffering is eating itself. Finally, it gives up. *Suffering itself* gives up. I can't even accept suffering.
Suddenly my mind is at peace and I'm filled with deep joy. I realize I am free. The chains of rebirth have been cut, and with this realization, past lives return to me. I remember a former teacher, long ago, and how I frustrated him. He has long ceased by now, will never experience, in this or any other life, how I finally reached my freedom. I am truly on my own, but finally I have broken free. I too shall cease.
This is what I felt.
Of course, it won't accept this. Eventually it sees through this like through any story. It is just that, a story. It cares not for any explanation, is not interested if the story is true, is suggestion, is misinterpretation, is a false memory, or anything else. It just doesn't accept this, and without its acceptance, I can't find lasting peace.
The chains are back. Although I didn't expect them at the time, I'm not surprised. I had been free before, free through God, and yet the chains had returned. Twice have I gone through this then. This made me a sakadagami. I have done it again once more some months later. None of it matters. It doesn't accept this.
At first I was disappointed. How can I achieve important stages on the path to enlightenment, and that matters not one tiny bit? How can this still be unacceptable? Then I became angry. It is denying me my freedom, my happiness! It is doing this merely to spite me. This is not about suffering anymore. It rejects not the world - it rejects *me*.
It is brutal, ruthless, unmoving, inevitable.
Only now have I realized that this is what I was always looking for. It is the Unchanging. It demands nothing of me, requires no service, no practice. It is eternal. It doesn't need to be *attained* - it is always there. In all the possible worlds can I find it.
It doesn't change. This is dukkha. I accept it.


@ -1,46 +0,0 @@
---
title: Ayahuasca, Again
date: 2012-05-07
techne: :wip
episteme: :speculation
---
So I recently read about someone's Ayahuasca experience. Because the person is reasonably familiar with meditation techniques and not a [17-year-old idiot](http://blog.muflax.com/2012/01/03/how-my-brain-broke/), I thought it might be of interest.
The rest of this post consists of quotes from several trip reports, told in the first person singular for entirely narrative and stylistic reasons. Totally.
# Setup
(Besides, I find it fascinating that no-one gives a shit about obscure jungle drugs. That's the most ridiculous hypocrisy in the whole drug war. I mean, all this bullshit about [weed](http://www.youtube.com/watch?v=jsKjcRNUaW0), but the Vine of [I'm Sending You Straight To Hell](http://www.youtube.com/watch?v=QBV5CplE0Hg) has [legal churches](https://en.wikipedia.org/wiki/Santo_Daime)? For fuck's sake.)
Anyway. I threw all my sanity out the window, ignored years of telling myself to not go back, certainly not without a sitter, got more potent stuff, better recipes, no sitter and watched the hell out of some plants.
(If I get used to it, I might get a sitter involved. But really, what are they supposed to do? Remind me to breathe? At best *I* might sit and introduce someone to the enlightened experience of having a panic attack while you're vomiting and shitting your pants at the same time. You wouldn't believe the demand these days.)
Last time I went with ~3g of Syrian Rue and ~10g of Mimosa Hostilis (iirc), using lemon juice for the extraction. That was 8 years ago and I learned some chemistry since then. (Though I'm still sticking to a fairly basic low-tech recipe.) Most importantly I learned that lemon juice doesn't evaporate and was largely responsible for my vomiting problem. Also, Syrian Rue has a pretty bad reputation, so I'm eliminating that as well. ("But Syrian Rue is so cheap!" Yeah, but not shitting yourself is worth the extra 10 bucks. Trust me. On a side note, wtf happened to drug prices these days? Just checked Silk Road and dried shrooms go for 10 bucks the gram. That's like... wow. I've seen someone once who grew them for like a tenth of that. The whole drug prohibition thing is such a shame. You could like literally grow your lifetime supply of drugs on even a student income if you didn't have to worry about the cops...)
So this time around I'm looking at old-school Caapi and again Mimosa Hostilis (it's really just a personal grudge against the Mimosa - other vines may well be higher quality or easier to use, but I must get my revenge first). I use a cold extraction, egg-white to filter out most of the nasty stuff, and add milk at the end. Results look real pretty, almost wine-like.
Took me a few days of experimentation to make my first brew. (For details, check out the [Ayahuasca forum](http://forums.ayahuasca.com) and its preparation guides. I had a lot of fun tweaking the recipes, balancing ease-of-use, cost, nastiness and so on, but I think people are more interested in the actual trip.)
So you might remember that 8 years ago, I went completely crazy, had no idea what to expect or how to deal with anything, and really no goal going in. Now I know what I'm doing, have learned vipassana and have a clear goal. (And I'm not a scared little kid anymore. I'm a scared adult now. Ok, adult-ish.)
What's the goal? I wanted to do more vipassana again, finally learn the higher-level stuff, but as I said before, vipassana is seriously broken. I can fix the ontology, but the main problem remains - most of it is really fucking boring.
Seriously, sit around all day, noting minor sensations? "Bored", "itch", "frustrated", "blinked", "itch". That's really hardcore. There are stages where you *wish* shit would be that calm, but most of the time, it's really unbearably boring. (It should tell you something that one of the most exciting attainments in vipassana - nirodha samapatti - is essentially a better version of falling asleep.)
But I'm a tantrika now. ([I sure am a lot of things.](http://www.rifftrax.com/shorts/what-is-nothing)) I have new options. I don't have to aim for not being bored through superior concentration skills, but can just not be bored by *not doing boring stuff*. (Some lessons look way less radical when you write them down.) I can just throw a ton of sensations at my mind whenever it feels underwhelmed, and there's no sensory overload like the Vine. ([Yamantaka](http://www.yamantaka.org/) would be proud.) Thus, the goal was simply to sit/lie in meditation, embrace whatever shit the Vine was going to throw at me, and note until I got bored. Straightforward vipassana from hell.
# The First Dream
[I had this dream.](https://en.wikipedia.org/wiki/Ten_Nights_of_Dreams)
One shit I'm not going to pull is talk about how I'm now "ok with life" or "found a connection to people" or any of that stuff. (Just read normal trip reports, or heck, watch some plants yourself.) My inner cynic is way too advanced for that. I can't trust anything, not even limitless love. It's totally suspect. I'm much more comfortable with fear and anxiety, so let's talk about *them*.
Besides, I've denied God himself by now, what could I possibly be afraid of? That they send the Beast That Has Slain Death Himself, aka the afore-mentioned Yamantaka?
Guess who showed up.
[I'm the man who's gonna burn your house down!](http://www.youtube.com/watch?v=7mt8I6cvFsM)


@ -4,6 +4,7 @@ date: 2012-01-03
techne: :done
episteme: :emotional
slug: 2012/01/03/how-my-brain-broke/
disowned: true
---
# New Year Resolution: Even More Narcissism


@ -1,13 +1,10 @@
---
title: ! 'Great Filter Says: Ignore Risk'
date: 2012-01-24
tags:
- great filter
- prayer
- saving the world
techne: :done
episteme: :speculation
episteme: :discredited
slug: 2012/01/24/great-filter-says-ignore-risk/
disowned: true
---
Quick, maybe silly thought.
@ -32,4 +29,4 @@ Some possible solutions:
3. No civ chooses the right option set, so despite random strategies, they *still* all fail 'cause the real correct option can never come up.
4. Survival is impossible.
There you have it. As my random strategy to save us from Hansonian Damnation and Happiness Paperclipping (Happyclipping?), I choose Pray To Possibly Dead And/Or Non-Existent Gods. You can thank me later.


@ -1,10 +1,6 @@
---
title: Algorithmic Causality and the New Testament
date: 2012-02-09
tags:
- history
- I've got 99 problems but N ain't 1
- kolmogorov complexity
techne: :done
episteme: :speculation
slug: 2012/02/09/algorithmic-causality-and-the-new-testament/


@ -4,6 +4,7 @@ date: 2012-03-14
techne: :done
episteme: :discredited
slug: 2012/03/14/catholics-right-again-news-at-11/
disowned: true
---
So I've [said][Why You Don't Want Vipassana] [repeatedly][The End of Rationality] now that I have serious problems with vipassana and the whole Theravada soup it emerged from. It's not just a technical problem, but a deep rejection of the assumptions, goals and interpretations of that framework, at least in its current form. I still like them enough that I'm not interested in taking my stuff and going home. I merely believe that vipassana, as it exists today in its numerous incarnations, is in serious need of repair, but still worthwhile. But before we start with the fixing, let's have a look at what's *broken*.


@ -2,7 +2,7 @@
title: Self-Baptism
date: 2012-04-23
techne: :done
episteme: :believed
episteme: :broken
---
Can you baptize yourself, if necessary? The answer is quite clearly yes, at least when no valid other baptizer is available.


@ -2,7 +2,7 @@
title: Reading Latin
date: 2012-05-01
techne: :wip
episteme: :speculation
episteme: :believed
---
So you want to learn how to read Latin. Sure, no probs. Let me channel [Khatzu of the Moto clan][AJATT] for a second... ahem...


@ -1,7 +1,6 @@
---
title: The Futility of Translation
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---


@ -1,50 +0,0 @@
---
title: Existence
date: 2012-05-04
techne: :wip
episteme: :speculation
---
So what's the deal with existence?
I don't get "things".[^things] Worse, I don't get "things exist" either.
Let's start with one version in which "exists" seems to make sense - the constructive one.
It's used as a technical term: "there exists an X, such that" (∃X: ...). You start with a precisely defined set of things (or rather, some constraints on thing-space), and then you ask questions about the cardinality of this set. If it's empty, no thing "exists" that fulfills the constraints that define the set. To say, "there exists an X, such that C" simply means, if I apply the constraints C to thing-space, I'll have at least one referent.
Or in other words, applying the rules C (the algorithm C) gives you a thing X, or fails to do so (either by returning nothing, or never halting). If it succeeds, X exists.
Existence, in that sense, is constraint-dependent. It makes no sense to ask if X exists "in general". The predicate of existence is always connected to a "such that". However, one might kind of generalize the predicate by observing that for any X, merely *naming* it gives a construction. Whatever you can think of *always* is the member of some set, and so "every thing" (everything you could name[^name]) "exists", in this sense.
[^name]: But remember that naming something [isn't trivial][Remark about Finitism].
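The constructive reading sketches naturally as code. Everything here is illustrative, and a finite thing-space conveniently dodges the never-halting case:

```python
def exists(constraint, thing_space):
    # Constructive "there exists an X such that C": applying the
    # constraint C to thing-space yields at least one referent.
    return any(constraint(x) for x in thing_space)

naturals = range(100)  # a finite stand-in for a precisely defined thing-space

print(exists(lambda n: 3 < n < 5, naturals))   # → True ("4 exists")
print(exists(lambda n: n * 0 == 1, naturals))  # → False (the set is empty)
```

Note that the predicate only makes sense relative to the constraint and the thing-space handed in; there is no `exists(X)` "in general", which is the whole point.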
One important feature of this view is how it treats "real" and "hypothetical" things.
Drawn from different sets.
If you have a construction, and that construction yields a thing, that thing exists. In other words, a "thing" is "what a construction does, assuming it doesn't fail". There's no "independent" existence of anything.
Nothing more can be said about things except their constraints, like whether they are "real" or "hypothetical" or anything like that. You can introduce the *construction* "is real", defined according to some rules, and then *according to this construction*, something may exist that is real.
But that's not an inherent property of the thing, but the result of performing some construction. Same for "hypothetical". Or "moral". Or "person".
You must already start with s
[^things]:
What's "a thing"?
"A number between 3 and 5 exists. Proof: 4." But what's "4"? I mean, I've never seen a "4". I've thought something that I think is the thought "4", but I'm not sure. What does "4" do? Maybe it makes more sense to translate everything to predicates, "to four"?
The cleverest attempt I've seen to solve this is [Laws of Form][], which starts with the best commandment ever:
1. Draw a distinction.
It then introduces a name ("the mark") for everything *in* the distinction, and derives Boolean algebra and so on from that. (Similar approaches exist in Set Theory and elsewhere, of course, but I find Laws of Form the most beautiful.)
I'm fine with the whole thing, except, you know, the first step. What's a "distinction"?
I've never seen one, either. "Experience this, not that" doesn't work. I can only experience "this". I've never experienced not experiencing something.
Si nemo ex me quaerat, scio; si quaerenti explicare uelim, nescio. ("If no one asks me, I know; if I want to explain it to someone who asks, I do not know.")


@ -1,6 +0,0 @@
---
title: Metaphysics
is_category: true
---
<%= category :metaphysics %>


@ -1,30 +0,0 @@
---
title: Meta > Intuitions
date: 2012-05-02
techne: :wip
episteme: :believed
---
This post isn't entirely serious, but it serves an important purpose. Hopefully, it will make a case for using meta arguments and a priori reasoning and against intuitions and contingent data, at least in some situations.
It should convince you just enough that there's something to it, that the "let's look at people's brains" school of metaethics is misguided in some ways. Nothing more, nothing less.
It starts with a [clever tweet][tweet truth] by Will Newsome:
> Preference utilitarianism is like aggregating everyone's beliefs and calling the aggregate Truth. That's not how justification works.
I'll add some less clever corollaries:
- Asking people thought experiments and calling a harmonization of their answers "morality" is like asking students math problems and calling the aggregate "calculus". That's not how thought experiments work.
- Looking at brains to separate values from biases is like looking at machine code to separate features from bugs. That's not how intent works.
- Bounded utility is like saying that calculus only works for numbers with up to 7 digits. That's not how universal laws work.
- Moral relativism is like arguing that not all cultures wearing green hats is evidence for some not wearing *clothes*. That's not how attractors work.
- Having priorities in your values is like saying that multiplication is more important than addition. That's not how orthogonality works.
-


@ -1,78 +0,0 @@
---
title: Meta-Meta-Morality
date: 1970-01-01
techne: :wip
episteme: :speculation
---
> Ich stampfe durch den Dreck bedeutender Metaphern,
> Meta, Meta, Meta, Meta für Meter...
> (roughly: "I stomp through the muck of significant metaphors, / meta, meta, meta, meta by the meter...")
> -- [Die Interimsliebenden](http://vimeo.com/36592271), Einstürzende Neubauten
Let's introduce a New Terminology. (Because you're not a *real* crackpot until you have your own lingo.)
Morality is the question "What should I do in this particular situation?". This is different from questions like "What do I want to do?" or "What do I know how to do?".
Think of morality like a mathematical function "moral :: (Situation, Action) -> Boolean". It takes a given situation and a proposed action and tells you if the action is moral or not, i.e. whether you should do it or not. The purpose of moral philosophy is to identify this function.
Faced with an unknown situation, like the Trolley Problem, we need to figure out what action to take. Essentially, we are faced with the set of all possible actions and we need to identify the moral ones. Let's call this search Morality, with a capital M, or M1 for short.
Unfortunately, we sometimes get stuck, and fail to find moral actions, or are uncertain about the moral value of a proposed action. Instead of trying to solve *this* problem, we use a clever trick and attempt to solve a different problem, one that, once solved, will shed light on Morality. Instead of looking for actions, we now look for a *method to find actions*. We ask ourselves, abstractly, what criterion should I use to identify correct actions? Let's call this search Meta-Morality, or M2 for short.
To clarify the difference between M1 and M2, let's look at the different outputs they might produce. M1 deals with things like "push this fat man", "eat this cow" or "pray to this god". M2 goes up a level and gives you rules like "eat no animals", "all lives are of equal value" or "always speak the truth".
Of course we might get stuck on M2 as well and we can repeat the trick by moving on to Meta-Meta-Morality, or M3. M3 is conventionally known as meta-ethics, as it deals with systems of rule-selection, like "consequentialism", "deontology" or "divine command theory".
Most theoretical moral philosophy is done in M3, while practical moral philosophy (aka [sila](https://en.wikipedia.org/wiki/%C5%9A%C4%ABla)) tackles the problem of how to implement M2.
Unfortunately again, we can get stuck in M3.
And now comes an important insight, and I say this in full crackpot hubris, that *nearly all of moral philosophy gets wrong*.
**You cannot justify a level with a lower level.** The Arrow of Justification always points downwards.
The major failure of meta-ethics is to justify M3 theories through M1 or M2 implications.
"We can't accept consequentialism because then we might end up pushing fat men in front of trolleys and that's horrible!" is simply *invalid* as an argument. M3 ("judge actions by their outcomes") can't be disproven by M2 ("it is wrong to push men in front of trolleys").
If you have a conflict on M3, you must make an argument on at least *M4*.
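The levels and the Arrow of Justification can be put into a toy encoding. Nothing here is a real formalization — the claims and level numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    level: int  # 1 = concrete action, 2 = rule, 3 = rule-selection system, ...
    text: str

def can_attack(challenger: Claim, target: Claim) -> bool:
    # The Arrow of Justification points downwards: a level can only be
    # justified (or disproven) from a strictly higher level.
    return challenger.level > target.level

trolley_rule = Claim(2, "it is wrong to push men in front of trolleys")
consequentialism = Claim(3, "judge actions by their outcomes")

print(can_attack(trolley_rule, consequentialism))  # → False: M2 can't disprove M3
print(can_attack(Claim(4, "some meta-meta-ethical principle"), consequentialism))  # → True
```

The common meta-ethical move this rules out is exactly the fat-man objection above: an M2 intuition deployed against an M3 theory.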
Of course, to ensure the correctness of your justifications, you must keep the levels separated. Here's an example of failing to do so, committed by Eliezer, no less:
> And if that disturbs you, if it seems to smack of relativism just remember, your universalizing instinct, the appeal of objectivity, and your distrust of the state of human brains as an argument for anything, are also all implemented in your brain. If you're going to care about whether morals are universally persuasive, you may as well care about people being happy; a paperclip maximizer is moved by neither argument.
>
> [...]
>
> In thinking that a universal morality is more likely to be "correct", and that the unlikeliness of an alien species having a sense of humor suggests that humor is "incorrect", you're appealing to human intuitions of universalizability and moral realism. If you admit those intuitions - not directly as object-level moral propositions, but as part of the invisible framework used to judge between moral propositions - you may as well also admit intuitions like "if a moral proposition makes people happier when followed, that is a point in its favor" into the invisible framework as well. In fact, you may as well admit laughter. I see no basis for rejecting laughter and accepting universalizability.
The mistake is that "accept (near-)universal values" and "laughter" are at different levels, with the first being M3 and the second M2. They are not comparable and have different standards of justification. As such, an M3 concern *always* overrides an M2 concern, and so universalizability is genuinely stronger than laughter.
(Eliezer is of course right about "universalizability" itself being a human value, in the sense that not all possible minds might be convinced by it. This claim rests on moral externalism, which I'm beginning to have doubts about, but this is beside the meta point.)
And it seems like there are two camps, one defending M1 ideas as fundamental, basically
And the basic idea seems to be, "I am here with a wild mixture of beliefs on many meta levels, many of them M1 and M2, but some M3 and upwards. Unfortunately, and mostly due to the blind selection process that created this mess, not all of these beliefs are transparent to me, or consistent under reflection, or non-contradictory. The goal of moral philosophy is to apply already existing high-meta (M3+) methods to transform the existing beliefs until they are consistent under reflection, and so on."
Essentially, this treats moral beliefs as a flawed (pseudo-)formal system, and moral philosophy as the application of *existing* rules until no more flaws exist.
If you're sufficiently sinful, then you're *screwed*. If no transformation away from sin exists that conforms with your already existing rules, then you can't take this path. Moral leaps of faith are impossible.
I find it hard to make an *analytical* argument against this. But it *disgusts me to the core*.
There are two kinds of people: those who trust in Substance and those who trust in Meta.
As all my views on morality are motivated by M4 and upwards, you can guess which camp I belong to.
The whole procedure is inherently *incremental*. You work on meta-levels to bring clarity to lower levels, make your restrictions more precise to exclude more candidates and so on. If, however, you have already excluded enough possibilities, then no further meta-work is needed. (It is also not necessarily fractal. At some point, all meta-levels might be sufficient and you're *done*, forever.)
Meta-emotivism.
A simple criterion I've started to use is *locality*.
Another is the *rejection of moral luck*.
(Incidentally, the song Die Interimsliebenden is something I'd love to talk about, but just can't 'cause it's not in English and I utterly fail at producing even a barely adequate translation. I have a draft about this futility, and it might be related to inter-subjective value comparisons, but alas...)


@ -1,8 +0,0 @@
---
title: Moral Luck and Meta-Moral Luck
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---


@ -1,14 +1,10 @@
---
title: Morality for the Damned (First Steps)
date: 2012-01-30
tags:
- antinatalism
- morality
- solomonoff induction
- tantra
techne: :done
episteme: :speculation
episteme: :discredited
slug: 2012/01/30/morality-for-the-damned-first-steps/
disowned: true
---
*This is maybe the most important question I'm currently trying to solve. I wish I could write (or better, read) a fully fleshed-out sequence dissolving it, but I don't even know if it's solvable at all, so I'm stuck with a lot of despair and confusion. However, here at muflax, inc. we occasionally attempt the impossible, so let's accept the madness and try to at least delineate what the problem even is.*


@ -2,7 +2,7 @@
title: Non-Local Metaethics
date: 2012-01-23
techne: :done
episteme: :speculation
episteme: :broken
slug: 2012/01/23/non-local-metaethics/
---


@ -1,8 +0,0 @@
---
title: How to Fix the Repugnant Conclusion
date: 2012-06-12
techne: :wip
episteme: :believed
---
I'm not going to solve the problem, I'm going to *meta*-solve it, i.e. show what *kind* of thing constitutes a solution.


@ -1,89 +0,0 @@
---
title: Utilitarianism Without Trade-Offs
date: 2012-05-27
techne: :wip
episteme: :speculation
---
> The cost of something is what you give up to get it.
>
> -- Second Principle of Economics, Mankiw
A common criticism of utilitarianism denies the plausibility that utility can be meaningfully aggregated, even in one person and under (near-)certainty. Let's say I offer you two choices:
1. I give you two small things, one good, one bad, of exactly opposite value. (Say, a chocolate bar and a slap in the face.)
2. I give you two large things, one good, one bad, of exactly opposite value. (Say, LSD and fever for a day.)
The sum of utility for each choice is exactly 0, by construction[^construction], and therefore, you should be indifferent between them.
[^construction]: Don't attack the hypothetical, bro.
This is assumed to be absurd, and therefore utilitarianism is false.
But where exactly does it fail? A chocolate bar is good, that's a fact. Let's not focus on whether you may not like chocolate, or can't digest it, or whatever - substitute a different snack. And also ignore whether a snack is *morally* good, like that's a grave and serious problem, and chocolate is only about preferences, not *The Good*. Whatever, don't get hung up on the word. A chocolate *still feels good*, and let's keep it at that. Just a simple observation.
And there are things that are *more good* in that sense. Cake is better. As is [Runner's High][]. Or the Fifth Season of Mad Men. You get the idea. So some things are good, and they can be compared. Maybe not at a very fine-grained level, but there are at least major categories.
There are also bad things. And they, too, can be compared. Some things are *worse* than others.
So we could rephrase the original choice without any explicit sum. First, I observe that you have two preferences:
1. You prefer the large good thing over the small good thing.
2. You prefer the small bad thing over the large bad thing.
Those two preferences have a weight. There might not be an explicit numerical value to it, but we can roughly figure it out. For example, I could ask you how much money you'd be willing to pay to satisfy each of those preferences, i.e. how much you'd pay to upgrade your small good thing to a large one[^viagra], and similarly downgrade the bad thing.
[^viagra]: This post is a cleverly disguised Viagra advertisement.
Then I tweak the individual things until both preferences feel equally strong. And this seems now far *less* absurd - if you knew I was going to give you a chocolate bar *and* make you sick a week later, and I offered you the choice between *either* upgrading to LSD *or* downgrading to a slap in the face, then, I think, being really uncertain seems very plausible to me.
You might be willing to flip a coin, even.
Alright, so rankings and indifference between choices seem plausible, so why does the original scenario fall apart?
Some [crackpots][Antinatalism FAQ] say because it puts good and bad things on the *same* scale. It treats bad things as anti-good things, the same way money works. "Pay 5 bucks, gain 5 bucks" and "pay 5 million, gain 5 million" are, everything else being equal, really the same deal in disguise.
So good and bad things are on *different* scales. There is one non-negative[^nonnegative] scale for good things, and one non-negative scale for bad things, and they are fundamentally orthogonal. A choice can be simultaneously good and bad, without one cancelling out the other.
[^nonnegative]: Note that it is not necessary that each thing has an explicit numerical value at the beginning. As long as you obey a strict relative ordering - that is, for any pair of things you can tell me which one is better or whether both are equal, and you're consistent in your choices - I can assign numbers and use those instead. If there's some minor uncertainty, like you don't *quite* know if you like Girls or Breaking Bad more, then we can simply approximate the value, add some error bars, and still do useful math, as long as the error isn't so huge that one day, you're swearing loyalty to Stannis "The Mannis" Baratheon, and the next day, you're defecting to the Lannisters. You filthy traitor scum.
Let's look at an example. We have two components, let's call them benefit and harm, or `B(x)` and `H(x)` for short. For any action `x`, `B(x)` returns positive numbers, while `H(x)` returns negative ones. ('cause harm is bad.) Taken individually, we want to choose our action `x` so that it maximizes the outcome.[^model]
We also have two actions:
1. `B(1) = 5`, `H(1) = -1`
2. `B(2) = 100`, `H(2) = -10`
That is, the first action would bring 5 benefit and -1 harm, the second 100 benefit and -10 harm, and so brings significantly more benefit and harm into the world.
[^model]: What is good isn't good because it returns a high number, but it returns a high number because it is good. That is, the numbers *model* goodness, but don't tell us *why* benefit is good. Here, we simply assume it is.
But how do we decide? If this were one-dimensional utilitarianism, we'd just take `U(x) = B(x) + H(x)` and do whatever action `x` gets the highest number. `U(1) = 5-1 = 4`, `U(2) = 100-10 = 90`, 2 wins by a large margin. Congratulations, comrade: millions killed, but billions saved.
But how would we do this without just summing them up? We can't just set `U(x) = [B(x), H(x)]`, i.e. return a vector - we also have to say how to compare this vector. We need *some* rule.
We could just say that harm is always more important than benefit, and so compare by `H(x)` first, taking `B(x)` into account only if it's equal. But then you'd prefer 1 slap in the face over 2 slaps in the face, even if I paid you millions of dollars for the second. *No* compensation *at all* seems just as weird.
What about this idea: all actions have a *ratio* of benefit vs. harm, `R(x) = B(x) / H(x)`.
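The three candidate rules - plain sum, harm-first lexicographic comparison, and the benefit/harm ratio - can be sketched in a few lines. (A rough illustration with the toy numbers from above; the function names are my own invention, and I take the absolute value of harm in the ratio so that higher means "more benefit per unit of harm".)

```python
# Toy sketch of the three decision rules, using the two example actions.

actions = {
    1: {"B": 5,   "H": -1},   # small good thing, small bad thing
    2: {"B": 100, "H": -10},  # large good thing, large bad thing
}

def u_sum(a):
    # One-dimensional utilitarianism: U(x) = B(x) + H(x)
    return a["B"] + a["H"]

def u_lex(a):
    # Harm dominates: compare H(x) first, break ties with B(x)
    # (tuples compare element-wise, so less harm always wins)
    return (a["H"], a["B"])

def u_ratio(a):
    # Ratio rule, R(x) = B(x) / |H(x)|: benefit per unit of harm
    return a["B"] / abs(a["H"])

for name, rule in [("sum", u_sum), ("lexicographic", u_lex), ("ratio", u_ratio)]:
    best = max(actions, key=lambda x: rule(actions[x]))
    print(f"{name}: choose action {best}")
```

Note how the lexicographic rule flips the verdict: it rejects the huge benefit solely because of the larger harm, which is exactly the "no compensation at all" oddity mentioned above.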
We have two possible actions, 1 and 2. 2 is greater in each individual component - more benefit, but also more harm.
So recall the second scenario, the one in which we might be willing to flip a coin. Try to actually *feel* that uncertainty. If you're anything like me, it *feels* very different than the uncertainty you might feel when you can't decide which red wine to buy. It's not a "meh, whatever" kind of indifference - it's "this completely sucks and I wish I wouldn't have to make this decision at all".
[^prob]:
The obvious problem is that probability must sum up to 1, but what does that mean for *utility*?
If you take a roughly computational / modal-realist view in which there are multiple (timeless) worlds you're spread among, then it makes a lot of sense to think of utility as your preference over what distribution over worlds you want to influence. For example, if you absolutely want vanilla ice cream to go extinct, then you push all your causal powers into worlds with ice cream, but ignore worlds without it.
Thus, it makes perfect sense to say that you have a total amount of influence over the worlds (which we normalize as "1") and you're now distributing it in sensible ways. Worlds with high utilities are simply worlds you care to act in.
The clever Taoist, of course, sets `U(x) = 2^-K(x)`, i.e. makes their Solomonoff-dictated probabilities equal to their utilities and accepts the world as-is, while retaining the power of choosing an *encoding*. Thus, chaos magic through interpretation.
This also makes the Taoist immune to Pascal's Mugging, as probability now cannot outgrow utility, and provides a neat justification for rejecting infinite utilities (for the same reason we reject infinite probabilities).

View File

@ -1,9 +1,6 @@
---
title: Unifying Morality
date: 2012-01-22
tags:
- antinatalism
- morality
date: 2012-06-22
techne: :done
episteme: :speculation
slug: 2012/01/22/unifying-morality/
@ -27,5 +24,3 @@ Imagine a world without hunger, poverty, broken promises, pain, rape, lies, war,
Maybe we should remind people how bad things really are. If lottery advertisement started with a list of the millions of people who *didn't* win, maybe buying a ticket wouldn't look so attractive anymore. If endorsement of life started with a list of [all the bad things][Child sexual abuse] that happen every day, maybe saying stop would sound much more appealing. If people realized what their ethical ideas [actually entailed][Mere Addition], maybe they wouldn't endorse them so easily.
It's worth a try.
*(And as an update, I've given up on writing a neutral antinatalism FAQ. I've tried to collect all arguments for and against it, treating them all equally and letting the reader decide, but I just can't do it. I think it's more honest if I make my position explicit, so I can clearly argue _why_ I find certain arguments silly without having to pretend otherwise. So I'm now writing a pro-antinatalism FAQ.)*

View File

@ -1,8 +1,6 @@
---
title: 3 Months of Beeminder
date: 2012-02-03
tags:
- beeminder
techne: :done
episteme: :personal
slug: 2012/02/03/3-months-of-beeminder/

View File

@ -1,72 +0,0 @@
---
title: Crystallization
date: 2012-01-11
tags:
- ai
- personal crap
techne: :done
episteme: :personal
slug: 2012/01/11/crystallization/
---
*Just some personal stuff. I tried writing this privately for the last few days, but avoided the work and didn't get anywhere. For some reason, public posts just work better. I apologize for the inconvenience. I plan to eventually split my content into "stable" (main site), "in-progress, somewhat experimental" (good parts of this blog, unpublished drafts) and "incoherent ranting I need to do in public or my brain gets stuck" (some unsorted note file or something). Expect it within a month or so.*
<%= image("35oj6n-199x300.jpg", "title") %>
Stuff's beginning to make sense. I got my wake-up call and some motivation to clean things up. Some former attachments that have been sucking up my time have disappeared.
I'm currently facing three problems:
1. Is powerful AGI possible within my lifetime?[^1] If so, how can I best help achieve it?
2. What's a good[^2] instrumental career for me to pursue?
3. How can I prevent myself from being deeply unsatisfied with my choices? How can I make life suck at most a tolerable amount?
And because I'm running out of time, I'll have to solve these problems *now*. Like with strict deadlines and milestones and everything.
# AGI
For the last few months, one sobering thought was that AGI will take a lot more time than I thought[^3]. Back in 2005, I kinda expected a Singularity by 2030 at most, so I didn't take much care to plan for my future. Why bother with careers when technological progress is your retirement plan?
According to [Luke][LW date], even SIAI thinks AGI is at least 3 more decades away. (Shane Legg is pretty much the only serious scientist I can think of that believes in early AGI.) That's a lot of time, and makes SIAI's strategy of outreach quite plausible. It's too early to actually focus on research and better to focus on enabling research later. Besides, I'm not a world-class mathematician, so I wouldn't be able to contribute directly anyway. (And I agree with the assessment that we need mathematicians and analytical philosophers, not engineers.)
So some implications: what AGI research needs right now is money and [volunteers that actually do something][Pirates Who Don't Do Anything]. (Louie Helm recently noted that he couldn't get *one* of 200 volunteers to *spread some links around* for SEO. That's... just wow. I know very little about charity work; maybe that's not unusual. But it's still appalling.) (And I'm no better - I thought [backing up a Minecraft claim][LW minecraft] was an actual good use of 10 hours of my time.)
This means that me helping with any research - and I don't have the delusion of being able to actually do AI research myself[^4] - isn't gonna happen and the best I can do is help others set up a research environment. So money and improving social environments. This leaves many of my mental resources open for personal projects. That's good. (But I'll have to work for money and I don't like that now, but I think after a year or two, I'll get used to it. If not, I can still try teaching meditation to <del>delusional fools</del> people interested in unusual and/or hardcore practice. Kenneth Folk seems to manage, so maybe there's enough of a market.)
## In Which muflax Digresses
But before we get to the career thingy, let's pin the AI thing down a bit more. Why am I interested in the first place? I don't really care for math research and personally I'm much more interested in history and efficient human learning, so AI is not a primary interest of mine. I also don't care about existential risk. Like, at all. (I have a hard enough time caring about muflax(t + 1 year).) But there's some potentially really cool insight in AI: algorithmic probability. It's our best guess yet for such a thing as general intelligence, in the sense that there is an ideal algorithm (or group of algorithms) for optimal problem solving and learning. The idea of algorithmic probability as Occam's Razor seems very interesting and fruitful. So I'm focusing a lot of my time on understanding this.
In order to do so, I'll write a kind of introduction to Solomonoff Induction, Kolmogorov Complexity, AIXI and some questions I'm currently facing. I'll probably turn this into a LW post once I properly understand it myself, have it polished and got some feedback. I'm also writing a German presentation for a class with n=1. (Yes, literally everyone except me dropped out, but hey I love AIXI, so I'm not letting that stop me. If [Schopenhauer can lecture to an empty room][Schopenhauer lecture], then so can I.)
My normal essay-writing method, especially for class, goes something like this: Start 4 months ahead of time. First month, do *nothing*. If someone asks you how you're getting along, say "fine". Next month, get a big cup of coffee and skim through the entire literature in one sitting, write down an outline of the paper, collapse. Don't do anything but play videogames for a few days. Next month, get even bigger cup of coffee and write "rough draft", i.e. fill in everything, cursing at how lazy you've been and how little you understand. Takes about 2-3 days. Collapse, sleep for 16 hours, do nothing for a week. Form the firm intention of editing and carefully checking your essay. Ignore intention until 1 day before deadline. Curse, try to fix as many mistakes as you can, hate yourself. Done.
Due to scheduling problems and so on, I can't use this approach this time. So I'm trying something new. I'm writing it *live*. Normally when I write class material, I don't *think* about the material. (This is a bug.) Thus my understanding is way too superficial and bullshit-y. However I noticed that back in high-school when I was practicing physics with a friend, I actually understood the stuff because I was *forced* to explain it to someone who was constantly poking holes into my theories. This friend had the patience to let me rationalize all day long, but he didn't let me get away with bullshit. (He benefited from it because I eventually *did* arrive at the right explanation, something he had trouble with.) So this time, I'm letting *actual questions* guide my writing process. More next post.
# Career
> It's time to make people take you more seriously. If they don't respond to your demands within a half-hour of reading this, start killing the hostages. -- [my horoscope for this week][onion horoscope]
Last year I got my first job ever, doing some embedded systems programming. I learned two things: I really like programming, and I really don't like hardware and anything related to it. So I'm now changing my specialization towards high-level programming and the web. This has another advantage: several projects I really like (including LessWrong and PredictionBook) have *way* too few programmers and many open problems. Jackpot! I can improve my skills and use it to build some reputation. The good thing is that I already know much of the underlying architecture, I just don't have much experience doing web work and no clue about interfaces. But I've been going around claiming that "learning is a solved problem", so I better shut up and *demonstrate* it.
Unfortunately, this specialization will mean I'll have to drop most of my hobbies. This is not so bad - thanks to my hyper-experimentation with different learning methods, I can actually convert almost everything into low-maintenance.
I'm not sure *where* I should be looking for a programming job after I get my degree, so I'll prioritize figuring this out. Not even sure about the country.
# Sticking with Stuff
Honestly? This section has been sitting here for a day, empty. I have *some* ideas how to go about this, but right now, I don't think talking about it would help, and I'm not even sure I *can* articulate it just yet. I feel I first have to make a mess and then can I go about cleaning it up.
So I'm off to write about Solomonoff induction, learn more anatomy and maybe do some philosophy reading on the side. (And when I can't think, play some BG2.) Not much else this month.
# Footnotes
[^1]: Why limit AGI to my lifetime? I don't have the caring capacity to fight for *other* people. If *I* can't benefit from it, then realistically, I'm not going to do it. I don't know if this is an expression of my real values, or just a limitation of my current hardware. In practice this won't make much of a difference, so I have to take this into account. (I *do* take care not to pursue options that would prevent me from changing my mind on the matter, like wireheading myself via meditation practice.)
[^2]: Why not best career? 'cause I tend to get stuck in perfectionist planning. I'll spend years figuring out how to raise my decision optimality from 80% to 90% instead of just going with the 80% option and *doing something with it*. I would *already* speak Japanese fluently if I hadn't spent nearly 2 years just experimenting with new techniques and had instead just used my best guess at the time. So I've decided to actively limit my exploration phase.
[^3]: When I say that I expected AGI soon, I rather mean that I expected one of *two* things - a Singularity *soon* or *never*. I was favoring "never" for mostly anthropic reasons. The Great Filter looked very convincing, and AGI without expansion seems quite implausible, so I shouldn't expect to ever see AGI myself. Recently, I've become a bit more skeptical about the Great Filter, but more importantly, I started taking AGI much more seriously once I saw the beauty of [algorithmic probability][Algorithmic Probability]. I do plan on re-visiting the Great Filter soon(tm), but I'm currently a bit swamped with projects. Once I have my antinatalism FAQ done, maybe.
[^4]: I'm probably smart enough in general terms to invent AI, given indefinite time and resources. But we have neither, so I'll defer to the people with better intuitions and established knowledge bases. No point in me spending 5-10 years learning research-level math that I could use to do something fun and earn some money to pay someone with probably decades more experience.

View File

@ -1,9 +1,6 @@
---
title: Daily Log (introduction)
date: 2012-03-09
tags:
- beeminder
- personal crap
techne: :done
episteme: :personal
slug: 2012/03/09/daily-log/

View File

@ -1,7 +1,6 @@
---
title: Google Web History
date: 2012-02-27
tags: []
techne: :done
episteme: :personal
slug: 2012/02/27/google-web-history/

View File

@ -2,8 +2,9 @@
title: The End of Rationality
date: 2012-02-22
techne: :done
episteme: :broken
episteme: :discredited
slug: 2012/02/22/the-end-of-rationality/
disowned: true
---
Time for a new belief dump! It's been at least 6 months since the last one, time to do a refresher on what beliefs have changed. This is more of a summary. I will elaborate on some points soon. But there is an overall tone of abandoning the LessWrong meme-cluster, and it certainly feels like my [Start of Darkness][] story. Maybe I suffered a stroke and have gone completely insane. (My reading of continental philosophy should count as evidence.) Maybe I'm just retreating to new signaling grounds. I don't know.

View File

@ -1,7 +0,0 @@
---
title: A Message for the Chosen
date: 2012-06-20
techne: :wip
episteme: :speculation
---

View File

@ -1,12 +1,8 @@
---
title: Why You Don't Want Vipassana
date: 2012-01-04
tags:
- lesswrong
- meditation
- tantra
techne: :done
episteme: :broken
episteme: :discredited
slug: 2012/01/04/why-you-dont-want-vipassana/
---

View File

@ -1,8 +1,6 @@
---
title: Incomputability
date: 2012-01-15
tags:
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/15/si-incomputability/

View File

@ -1,9 +1,6 @@
---
title: Kolmogorov Complexity
date: 2012-01-14
tags:
- bayes
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/14/si-kolmogorov-complexity/

View File

@ -1,11 +1,6 @@
---
title: Occam and Solomonoff
date: 1970-01-01
tags:
- computation
- occam's razor
- solomonoff induction
- theology
techne: :wip
episteme: :speculation
---

View File

@ -1,7 +1,6 @@
---
title: Progress
date: 2012-02-06
tags: []
techne: :done
episteme: :personal
slug: 2012/02/06/si-progress/

View File

@ -2,7 +2,7 @@
title: Remark about Finitism
date: 2012-01-15
techne: :done
episteme: :broken
episteme: :discredited
slug: 2012/01/15/si-remark-about-finitism/
---

View File

@ -1,7 +1,6 @@
---
title: Solomonoff Induction
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---

View File

@ -1,8 +1,6 @@
---
title: Some Questions
date: 2012-01-11
tags:
- solomonoff induction
techne: :done
episteme: :speculation
slug: 2012/01/11/si-some-questions/

View File

@ -1,12 +1,10 @@
---
title: Universal Prior and Anthropic Reasoning
date: 2012-01-19
tags:
- great filter
- solomonoff induction
techne: :done
episteme: :speculation
episteme: :discredited
slug: 2012/01/19/si-universal-prior-and-anthropic-reasoning/
disowned: true
---
*(This is not really part of my explanation of Solomonoff Induction, just a crazy idea. But it overlaps and does explain some things, so yeah.)*

View File

@ -1,8 +1,6 @@
---
title: Why an UTM?
date: 2012-01-15
tags:
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/15/si-why-an-utm/

View File

@ -1,10 +1,6 @@
---
title: Self-Help is Killing the Status Industry
date: 2012-04-04
tags:
- akrasia
- signaling
- why life sucks
techne: :done
episteme: :speculation
slug: 2012/04/04/self-help-is-killing-the-status-industry/

View File

@ -1,12 +1,6 @@
---
title: Consent of the Dead
date: 2011-12-30
tags:
- antinatalism
- consent
- doctor deontology
- guys i'm totally going with this doctor deontology thing
- thougt experiment
techne: :done
episteme: :fiction
slug: 2011/12/30/consent-of-the-dead/

View File

@ -1,14 +1,8 @@
---
title: Happiness, and Ends vs. Means
date: 2012-03-22
tags:
- consent
- doctor deontology
- kant
- morality
- thought experiment
techne: :done
episteme: :speculation
episteme: :fiction
slug: 2012/03/22/happiness-and-ends-vs-means/
---

View File

@ -1,11 +1,6 @@
---
title: On Benatar's Asymmetry
date: 2011-12-30
tags:
- antinatalism
- doctor deontology
- doctor deontology is a totally awesome villain
- thought experiment
techne: :done
episteme: :fiction
slug: 2011/12/30/on-benatars-asymmetry/

View File

@ -1,11 +1,6 @@
---
title: Suicide and Preventing Grief
date: 2012-03-21
tags:
- doctor deontology
- morality
- suicide
- thought experiment
techne: :done
episteme: :fiction
slug: 2012/03/21/suicide-and-preventing-grief/

View File

@ -5,66 +5,46 @@ techne: :done
episteme: :believed
---
Foreword
========
Every mystic needs their own gospel. You can't just go around, claiming "Those dudes pretty much said it all." and expect to be taken seriously. You have to not just invent your own terms, mythology and techniques, you also need your own Holy Texts about Everything One Needs To Know. At the very least, write some <del>fanfic</del> new revelations to some other text. But then it better have some angels and dragons and shit!
So, here is mine. Even with added confusing commentary! Further revelations will be added as The One Who Knows Shit (TOKSHI, 特使) teaches me more.
Gospel of Muflax
================
Thus have I heard.
1. TOKSHI said, nothing lasts.[^anicca] If you watch closely, things keep on wobbling away, even really sticky ones.
1. The Savior said, nothing lasts.[^anicca] If you watch closely, things keep on wobbling away, even really sticky ones.
2. TOKSHI said, don't identify with things. Those that think they are things, become things. Those that don't think they are things, aren't bogged down by all the silly associations.[^anatta]
2. The Savior said, don't identify with things. Those that think they are things, become things.[^anatta]
3. TOKSHI said, now is good, tomorrow never good enough.
3. The Savior said, now is good, tomorrow never good enough.
4. TOKSHI said, don't force it. Things have a way of getting done if you don't try to get them done while getting them done.[^gtd]
4. The Savior said, don't force it. Things have a way of getting done if you don't try to get them done while getting them done.[^gtd]
5. TOKSHI said, do whatever you want to do because that's what you're gonna do anyway. But try being nice sometimes?
5. The Savior said, those who know the Law will follow it.
6. TOKSHI said, those that know the Law, will follow it. Those that say they know the Law, will break It. But they will get laid.[^unity]
6. The Savior said, if you try hard, many things become possible.
7. TOKSHI said, it's worse than you think. The Thing That Makes Things Happen According To Plan lost The Plan, but It is pretty good at faking it.
7. The Savior said, don't wish for things because then you will get exactly what you wished for and it will totally suck and you will look stupid.[^wish]
8. TOKSHI said, the universe doesn't care about you. Like, at all.[^emptiness]
8. The Savior said, every strength is a weakness. Be empty, be invincible.
9. TOKSHI said, if you try hard, you can do some wicked shit with your mind. Play around some time.
9. The Savior said, fail in interesting ways.
10. TOKSHI said, everything is better with practice. Try it some more, you will get good at it, promise. This includes dying.
10. The Savior said, go meta.[^meta]
11. TOKSHI said, don't wish for things because then you will get exactly what you wished for and it will totally suck and you will look stupid.[^wish]
11. The Savior said, justification flows from above to below.
12. TOKSHI said, you have a brain the size of a coconut. You think that's just the right size to understand everything? Are you sure about that?
12. The Savior said, never compromise.
13. TOKSHI said, there are only two correct answers to the question "What is this?"[^whatisthis] - the first, "It's not what you think it is."; the second, "Let me show you!".
13. The Savior said, be concrete about your despair.
14. TOKSHI said, every strength is a weakness. Be empty, be invincible.
15. TOKSHI said, fail in interesting ways.
16. TOKSHI said, go meta.[^meta]
17. TOKSHI said, justification flows from above to below.
18. TOKSHI said, never compromise.
19. TOKSHI said, be concrete about your despair.
20. TOKSHI said, don't be happy.
14. The Savior said, don't be happy.
[^anatta]:
Try "finding yourself" some time. But don't do a half-assed job and stop with the first thing that comes up. Be thorough. Allow yourself to be genuinely surprised. If it makes sense right away, then it is most definitely wrong.
A koan. One day, a monk went to TOKSHI and said, "Holy One, my mind has no peace as yet. Please, put it to rest.". TOKSHI quoted,
A koan. One day, a monk went to the Savior and said, "Holy One, my mind has no peace as yet. Please, put it to rest.". The Savior told him,
> Huike said, "Your disciple's mind has no peace as yet. Master, please, put it to rest.". Bodhidharma said, "Bring me your mind, and I will put it to rest.". Huike said, "I have searched for my mind, but I cannot find it.". Bodhidharma said, "I have completely put it to rest for you.".
The monk searched and returned. "Holy One, I have searched for my mind and found it. Please, put it to rest now.". Upon hearing this, TOKSHI died.
The monk searched and returned. "Holy One, I have searched for my mind and found it. Please, put it to rest now.".
Upon hearing this, The Savior left the town and never returned.
[^anicca]:
When you watch yourself watching yourself, you will occasionally catch yourself not watching yourself. Really. Try it. (Unfortunately, it takes a lot of practice to get there. The problem is developing a strong enough concentration and to get rid of many mental filters until you can direct your attention at your own attention. The rest falls into place in no time.)
@ -76,42 +56,4 @@ Thus have I heard.
This is when the Tao can take over. The Tao is a lot nicer than the Demon. So, stop making plans. And imagining demons. What are you, 5?
[^unity]:
Bullshitting others and yourself is a crucial skill in evolution. Even bacteria fake hard work while slacking off. Saying what is right, but doing what is convenient is the Ultimate Shortcut.
[^emptiness]:
This place is Limbo. It is utterly devoid of Meaning, of Truth, of Value, of God, of Purpose or of Choice. You are not even a prisoner or a slave because there is no Master, no Punishment, no Torment and no Guilt.
Nothing you or any one else does matters. Understanding is irrelevant and temporary. Ignorance always takes over, no Structure lasts, everything is ground down eventually. There is no escape, but also no meaning to be had in suffering or revolution because there is no one watching, nothing to escape into and no transformation to be achieved. All we will ever do will be undone.
It is the worst of all Hells because it uses consciousness against itself. If there were active punishment, active torment, any plan at all, we could rebel. We could take a stand. If there were any purpose, Freedom would be possible.
It is the best of all Heavens because it never lasts, never allows us to grasp it, never fulfills. It gives us constant struggle, doomed to failure, and in it, we strive.
It is just on the brink of emptiness, just full enough that our minds can make out Forms and Shadows, but not enough for them to hold onto. All suffering is self-inflicted by delusion, but knowledge is impossible, and delusion becomes inevitable.
> Long have you repeatedly experienced the death of a mother. The tears you have shed over the death of a mother while transmigrating and wandering this long, long time - crying and weeping from being joined with what is displeasing, being separated from what is pleasing - are greater than the water in the four great oceans.
>
> Long have you repeatedly experienced the death of a father, the death of a brother, the death of a sister, the death of a son, the death of a daughter, loss of relatives, loss of wealth, loss due to disease. The tears you have shed over loss with regard to disease while transmigrating and wandering this long, long time - crying and weeping from being joined with what is displeasing, being separated from what is pleasing - are greater than the water in the four great oceans. Why is that? From an inconstruable beginning. A beginning point is not evident, though beings hindered by ignorance and fettered by craving are transmigrating and wandering on. Long have you thus experienced stress, experienced pain, experienced loss, swelling the cemeteries - enough to become disenchanted with all fabricated things, enough to become dispassionate, enough to be released.
>
> -- Buddha, Assu Sutta
This world is exactly as I would have designed it. Have fun.
[^wish]:
A long time ago, a young monk had a clear vision of The Perfect Life, including a place to live and a girl to be with. Then he moved to that place and met that girl and he was really unhappy. Turned out, he didn't actually like being there and the girl was kinda boring and just as full of fear and uncertainty as the monk, so what good is she, really?
[^whatisthis]:
A koan.
After a lesson, TOKSHI would often meet with individual students in private and allow them to ask any question about things they didn't understand. One day, an especially curious monk hid behind a curtain and listened to the conversations. That day, three students came.
The first student asked TOKSHI, "Holy One, you have told us about God. I don't know what God is. Can you tell me?", and TOKSHI answered, "It's not what you think it is. Let me show you!", upon which TOKSHI made the student see God.
The second student asked TOKSHI, "Holy One, you mentioned rebirth. What is this?", and TOKSHI answered, "It's not what you think it is. Let me show you!", upon which TOKSHI made the student be reborn.
The third student asked TOKSHI, "Holy One, what is liberation?", and TOKSHI answered, "It's not what you think it is. Let me show you!", upon which TOKSHI made the student free.
Upon hearing TOKSHI's three answers, the monk was enlightened.
[^meta]: Including on going meta. You can always go meta.

View File

@ -11,6 +11,8 @@ All major changes on the site
=============================
{:#changelog}
- 2012/06/22: Major cleanup due to the [Condemnation][].
- 2012/05/25: [Antinatalism FAQ][] is officially not a draft anymore.
- 2012/04/18: Major site redesign.
@ -35,7 +37,7 @@ All major changes on the site
More overviews are coming. Soon-ish. If I don't get bored, that is. Basically, the language articles aren't happening, Great Filter is somewhat delayed, and an informal introduction to Solomonoff Induction and Kolmogorov Complexity is coming This Month(tm).
I've also begun writing an [overview about New Testament][Jesus FAQ] scholarship, mostly to collect interesting theories and keep all the names straight. It will be months until it will actually be useful, but I have time.
I've also begun writing an overview about New Testament scholarship, mostly to collect interesting theories and keep all the names straight. It will be months until it will actually be useful, but I have time.
I've moved some stuff from the [Blog][] into proper articles. Content didn't change in case you read them already. The other posts will also be converted once I transition away from the blog. Articles: [Backups][], [Developing Synesthesia][], [Dude, Where's My Time?!][], [On Samsara][], [Persinger's Magnetic Field Hypothesis][], [Three Sides][], [Why Can't I See Through This Wall?][], [Why I'm Not a Vegetarian][].

View File

@ -2,7 +2,7 @@
title: Fixing Concentration
date: 2010-07-13
techne: :rough
episteme: :broken
episteme: :discredited
toc: true
---

View File

@ -2,7 +2,7 @@
title: Dude, Where's My Time?!
date: 2011-02-03
techne: :done
episteme: :believed
episteme: :personal
---
A few days ago, I got up at 6:00 and went to bed around 22:00. During the day, I felt I worked rather well. Maybe not optimally so, but still pretty well. When I was about to fall asleep, I thought about what I had accomplished and couldn't help but feel disappointed. I could remember about 2 hours of work or so. But I was awake for over 18 hours! Where did all my time go?

View File

@ -2,7 +2,7 @@
title: Ways to Improve Your Sleep
date: 2010-05-27
techne: :done
episteme: :broken
episteme: :discredited
---
Some stuff that I found that actually works.

View File

@ -2,7 +2,8 @@
title: Persinger's Magnetic Field Hypothesis
date: 2011-01-01
techne: :done
episteme: :believed
episteme: :broken
disowned: true
---
Normally, I'd do an introduction to who [Michael Persinger][] is, but I'm not in the mood, so let's just say that he is an (awesome!) mind researcher who developed the infamous God Helmet, which induces just the right kind of magnetic field around a brain to trigger, in most people, a sense of wonder and the presence of somebody invisible being with them in the room, and in a few, a full-blown religious experience. Also, there's his great lecture on drugs:

View File

@ -2,7 +2,7 @@
title: Developing Synesthesia
date: 2011-01-27
techne: :done
episteme: :believed
episteme: :broken
---
Synesthesia is the automatic connection of different senses. Typical example: perceiving numbers as having a color. Or the LSD version: seeing music.

View File

@ -1,120 +0,0 @@
---
title: Progress Of Insight Explained Through The Matrix
date: 2011-09-05
techne: :wip
episteme: :broken
---
Introduction
============
The basic Theravada map of enlightenment is way cool. Better yet, it's very accurate. It does have some flaws, though. The main one is that it's closely linked to meditation, so if your insight progress doesn't come through meditation, especially in the beginning, the map will be somewhat off or even misleading. Still, it is one of the best maps[^best] we have, so I thought another shot at explaining it would be worth it.
And the best way of explaining enlightenment is by following one of the best movies ever made - The Matrix. Now, I'm not saying that The Matrix actually *is* about the Theravada map or enlightenment in general, but it incorporates so many mystic elements that it can be used *as if* it were one. It is excellent raw material to base a commentary on. It only needs some explanations and a bit of editing and you could essentially run it as a crash course in mysticism. In fact, (awesome) Gnostic Stephan Hoeller has done just such a commentary over on [gnosis.org](http://www.gnosis.org/lectures.html) (among the Web Lectures in the left sidebar).
The main problem, really, is that beginners are told things they don't know how to do and have no context on how to even understand them. Like Neo, after seeing Morpheus jump hundreds of feet, says:
> Neo: Okey dokey... free my mind. Right, no problem, free my mind, free my mind, no problem, right...
...and he fails, as expected. No clue at all how that is even supposed to work. It's not *his* fault, though - he just lacks a lot of information. This I'm trying to remedy a bit. Help make the whole process a lot more goal-oriented and pragmatic.
If you are interested in the details or want to know more about the actual map, read Daniel Ingram's free book [Mastering the Core Teachings of the Buddha]. This is easily the best howto on Buddhism ever written. Without any metaphysical baggage or drivel, this is exactly what the Buddha was all about. I follow his book closely, but also the underlying work by [Mahasi Sayadaw], [The Progress of Insight], and the Theravada classic, the [Visuddhimagga] (Path of Purification). Essentially, they are all just variations on the same theme and the basic template is inherent to all Theravada Buddhism. I've taken a few liberties with the actual map, but only to convey a better feeling for what's going on or to choose labels I feel fit better, especially in the context of the movie. After all, the map is not the territory, and too strong a devotion to any particular model helps nobody.
Enough introduction, let's get this going.
[Mastering the Core Teachings of the Buddha]: http://www.interactivebuddha.com/mctb.shtml
[Mahasi Sayadaw]: http://en.wikipedia.org/wiki/Mahasi_Sayadaw
[The Progress of Insight]: http://www.accesstoinsight.org/lib/authors/mahasi/progress.html
[Visuddhimagga]: http://www.scribd.com/doc/30119169/Buddhaghosa-Bhikkhu-Nanamoli-tr-Path-of-Purification-Visuddhimagga
Beginning
=========
Neo experiences the 3 Characteristics, as they are called in Buddhism. He realizes that his world is fake and not as solid as it appears to be (it is impermanent, [Anicca]), that his self-image as Mr. Anderson is false and he lacks a true understanding of what he is (there is no self, [Anatta]), and he is dissatisfied with the world, his only desire is to overcome it (suffering, [Dukkha]). These 3 Characteristics - everything ends, isn't you and won't satisfy you - are really all there is to it. If you fully get them, you are basically done. (Well, there's a bit more, and that's exactly where the map falls apart. I'll outline some aspects of it at the end, but to be honest, I'm still confused myself about what an appropriate map of this region really should look like.)
[Anatta]: /buddhism/anatta.html
[Anicca]: /buddhism/anicca.html
[Dukkha]: /buddhism/dukkha.html
Particularly the characteristic of no-self, [Anatta], is shown in the movie through Smith's deconstruction of Neo's identity. Is he really Mr. Anderson, working a job as a programmer, being a hacker, all this? No. The moment you start pushing it, it all goes away. It doesn't last one minute to scrutiny.
I want to clarify one point here. This is often misunderstood, even by advanced practitioners. When I say that Neo is without self, what I mean is that he identifies with a construction. None of it, at any point - being a programmer, being a hacker, even being the Chosen One - is really *him*, but more like a role he adopts. The confusion arises when you understand this, but think the problem is that he has an *unhealthy* self. The problem is not that being a corporate programmer sucks and being the Chosen One rocks, so let's ditch the first for the latter. What Neo must understand is the emptiness of all "self".
> Agent Smith: You're empty.
> Neo: So are you.
Neo, really, is empty; confused about the world and what he really *is*. All he *thought* he would be is stripped away, finally, by the Big Event. The turning point.
Arising and Passing Away
========================
> Morpheus: This is your last chance. After this, there is no turning back. You take the blue pill - the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill - you stay in Wonderland and I show you how deep the rabbit-hole goes.
I personally call this [Kundalini] Rising because for me, most of the time when this happens, it starts as a tingling sensation in the spine and moves from there. The image of having my spine ripped out by a giant standing over me while I meditate has often preceded the experience.
[Kundalini]: http://en.wikipedia.org/wiki/Kundalini
You make it past this point, you are a mystic, no matter *what*. It's not your choice anymore. The path won't ever leave you alone. You are stuck now and the only way out is to go all the way and defeat the Matrix. This isn't so bad, really, except that it's actually quite easy to get here purely *by accident*, without any intention of being a mystic at all. I've met a lot of drug users who had this happen to them (including me, in a way[^initiation]). Or as Shinzen Young says, "There's no informed consent to enlightenment." Eris is a bitch.
[^initiation]:
Well, I was young and trying to figure out what all this mysticism stuff is all about. You know, like hallucinations, astral travel and secret knowledge? I just wanted to see a bit of it, to see if it was real and what it all looked like. Just to get an impression. I got an impression all right. After a bit of dabbling and weird, but unsatisfyingly weak low-level stuff, I made it all the way to Re-observation on a single trip. Great place to get stuck in for years, if madness is your thing. I've always been a fan of it myself, despite all the trouble. Totally worth it.
I personally really like the fact that right after Neo takes the pill and is hooked up to the tracing machine, he notices a broken mirror next to him. The mirror first repairs itself, then starts warping and finally covers Neo.
A quick note again on no-self, Anatta. Neo's training shows this, actually. The "real" Neo, if you want, has no attributes, no abilities, no identity. All of this is just added on later, quite arbitrarily. During the training, Neo becomes a kung-fu master, an expert in all kinds of weapons and machinery and other skills. It is obvious that this selection is limited only by time and imagination, and is narrow only because of the kind of missions he'll be on. If he wanted to be a cook, a writer, anything, really, he could easily become one. What, then, is the "real" Neo? It's there, but it has nothing to do with his personality, with his self.
After being unplugged, after a glimpse of the real world, comes the inevitable. This is a place many people get wrong. They think, at this point, that they are enlightened. Some think they have become, literally, Jesus (I know at least 3) or some other such figure. But Kundalini always comes to rest again, normally within about 6 hours to a few days.
Then comes the flushing.
The Dark Night of the Soul
==========================
image("flush.png", "the end of the rabbit-hole")
> Agent Smith: But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from.
Neo's question "Am I dead?" is typical. The whole Dark Night very much feels like dying because in many ways, it *is* death.
The Dark Night has multiple parts to it, although in which order and to what extent they appear, varies. They are: dissolution, fear, misery, disgust.
The night comes to its end with the Desire for Deliverance. Being completely fed up with it, the will returns, the will to keep going and make it all *end*.
In The Matrix, Neo arrives at this point twice. This is normal. Rarely does anybody get through the Dark Night on their first try. The first time, Morpheus was just captured, everything is falling apart and Neo is convinced that he can't be The One. Fortunately, he decides that Morpheus' imprisonment is his fault and it's his job to free him. This mobilization of forces characterizes the end of the Dark Night. Suddenly, it's as if nothing can stop you.
Reality, however, sees things a bit differently. Despite early successes against the agents, everyone has to flee. Fear is back and strong as ever. But after Trinity and Morpheus are safe, the second time for the Desire for Deliverance has come. Neo is just about to run from Smith, but he decides against it and "is beginning to believe".
The full realization of the nature of the Matrix dawns on Neo. If it's all an illusion, then he can win. He *can* defeat Smith. So he tries.
Re-observation
==============
But no matter how well he fights, no matter how much Neo tries to beat Smith at his own game, he can't win. Like Smith, delusion never tires. It never gives up. Even after destroying Smith once through the subway train, he just comes back again. It's hopeless, so even full of strength, Neo runs.
His only hope of escape destroyed, he is trapped. His back is to the wall, he cannot run away anymore, but he also can't face the problem. The agents are invincible. There is no forwards and no backwards. He is torn apart by his own weakness. He can't flee the Matrix anymore, but he can't deal with his problems, either. Yet he is forced to do so. All his strength was not enough to defeat Smith, all his speed was not enough to escape him. Nowhere left to go, there is only death.
bang!
Path
====
There is a Zen metaphor for this. It's like you are trying to reach a goal that is 11 meters up in the air, but you've only got a ladder that is 10 meters long. You climb all the way to the end and still can't reach it. The only way is to *keep on climbing*. I know, when you hear this, it probably makes no sense to you. It didn't to me, either. But when you are there, when you actually reach the end, you will see. It will make sense then. *Keep on climbing*.
Unfortunately, this is the part where the movie somewhat breaks apart. It all goes very fast, which makes this long and fascinating journey look like it takes only a few moments, when really, it typically takes several weeks, if not months. So let's slow *way* down.
In this moment of resurrection, you can also see the Unity of Knowledge and Action. At the exact moment Neo *sees* the Matrix for the first time, when his view shifts to the code, he also, through this knowledge, gains power over it. Understanding the delusion of the Matrix completely, deeply, makes him invulnerable to it. The agents lose all power over him.
> Neo: What are you trying to tell me? That I can dodge bullets?
> Morpheus: No, Neo. I'm trying to tell you that when you're ready, you won't have to.
This is what is meant with overcoming suffering. It's not that you suddenly become able to accept suffering or that it goes away - you are not dodging bullets. Instead, it just stops being a problem. It has no power over you anymore, just like you couldn't shoot Neo, even though the bullet's still there.
> Morpheus: Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.
And there we have it. Neo is **enlightened**. Unfortunately for Neo, the journey isn't over yet. There's still lots of things to do. He hasn't really reached *full* enlightenment yet. It's as if you wanted to clean a mirror. On the mirror are three layers of dirt, one for each characteristic - a layer of permanence, of self and of satisfaction - and all three need to go. Enlightenment is when, for the first time, you manage to clean a little bit of the mirror so that you can actually see the real thing. But still, there's a lot of dirt left, so keep on cleaning! But now that you know how to get it clean, the rest will be a lot easier.
[^best]:
The other map that really deserves lots of attention is Robert Anton Wilson's extended version of Timothy Leary's Circuit Model, as described in Prometheus Rising. Very useful as a broad map, but it lacks lots of details. Still, it's the one thing I'm constantly going back to for help.

View File

@ -1,32 +0,0 @@
---
title: Hitler Was Right
date: 2012-03-30
techne: :wip
episteme: :mindkiller
---
This article is *highly* political. It's firmly in mind-killing territory, and to some degree, contains outright trolling. You've been warned.
One of the most powerful weapons of a winning ideology is to define the framework of acceptable political debate. From *within* the framework, political debates seem wide and nuanced. From *without*, they are anything *but* - narrow, highly selective, barely distinguishable.
As a result, even controversial debates within the framework share virtually all their conclusions, and disagree mostly about priorities and justifications.
TODO example
Let's try *actual* political dissent: **Hitler was right**.
This must be a kind of novel position to argue, as even Neo-Nazis don't actually side with Hitler anymore. They are ideological pussies and an *offense to actual Nazism*[^hitlerist]. *No one* thinks Hitler had a point, but I think he did.
[^hitlerist]: That's right, I'm writing a *Hitlerist* criticism of Neo-Nazism. Doesn't happen much these days.
This position is so firmly off the charts that I can't even invoke any reasonable disclaimer here. So let's just start and not worry about politics while I argue that Hitler was right.
# Hitler Was Right About Judaism
# Hitler Was Right About Bolshevism
# Hitler Was Right About Germany's Fate
# Hitler Was Right About His Role In History
If *anyone* counts as a Tragical Hero, it's *him*.

View File

@ -3,7 +3,7 @@ title: Antinatalism Overview
alt_titles: [Antinatalism FAQ]
date: 2012-05-25
techne: :rough
episteme: :believed
episteme: :speculation
toc: true
---

View File

@ -2,7 +2,7 @@
title: On Purpose
date: 2011-03-11
techne: :done
episteme: :broken
episteme: :discredited
---
Two reflections on purpose and two open questions.

View File

@ -4,6 +4,7 @@ alt_titles: [Stances, Dark Stance]
date: 2011-07-18
techne: :done
episteme: :broken
disowned: true
---
I destabilized again, but this time I see a different direction to stabilize in, something I've never done before.

View File

@ -4,6 +4,7 @@ date: 2010-05-13
techne: :done
toc: true
episteme: :discredited
disowned: true
---
This is a little series of thoughts on the book "Consciousness Explained" by

View File

@ -4,6 +4,7 @@ date: 2010-05-03
techne: :done
toc: true
episteme: :discredited
disowned: true
---
Motivation

View File

@ -3,16 +3,17 @@ title: There Is Only Quale
date: 2010-09-23
techne: :done
episteme: :fiction
disowned: true
---
> If you believe such nonsense
> You'd better dream your dreams at night.
> At last, it's really happened,
> Though we don't know how.
> The only miracles are in the storybooks
> And they are lies.
> If you believe such nonsense
> You'd better dream your dreams at night.
> At last, it's really happened,
> Though we don't know how.
> The only miracles are in the storybooks
> And they are lies.
>
> -- ジャックと豆の木 (engl. dub, Jack and the Beanstalk)
> -- [ジャックと豆の木][A Course in Miracles - Jack and the Beanstalk]
Lucid Dreaming
==============

View File

@ -2,7 +2,7 @@
title: Why Can't I See Through This Wall?
date: 2011-05-20
techne: :done
episteme: :believed
episteme: :personal
---
*At times I look back on attainments and ask myself what life was before them or what working up to the change felt like. This post is an emotional core-dump for that purpose.*

View File

@ -2,7 +2,8 @@
title: On the Crucifixion
date: 2011-03-11
techne: :rough
episteme: :broken
episteme: :discredited
disowned: true
---
<%= youtube("http://www.youtube.com/v/PZBqsqvfj0Y") %>

View File

@ -3,6 +3,7 @@ title: Gospel of Muflax
date: 2010-11-12
techne: :done
episteme: :believed
merged: gospel:/sayings/
---
Foreword

View File

@ -1,168 +0,0 @@
---
title: Early Christianity Overview
alt_titles: [Jesus FAQ]
date: 2011-12-13
techne: :wip
episteme: :believed
---
Introduction
============
The reason I'm writing this is my personal obsession with the origin of mysticism in general and Jesus in particular. Unfortunately[^hobby], the source material is vast and multilayered, so it's hard sometimes to keep track of all the characters and crackpot theories out there. But then, I need something to do for the next couple of decades. Somebody needed to write a general overview, and I was looking for a new hobby. Seems like a good match.
Of course, I'm totally biased. What material gets included depends on what material I read, which depends on what looks interesting to me. But decades are long, so eventually I'll get around to a lot of literature. It just might take some time.
## Higher Criticism
aka Historical Criticism
First rule of Higher Criticism: anything that survived in writing must have served someone's purpose.[^writing] Because writing was so expensive and time-consuming, no-one would've written anything down unless they saw a use in it. Thus, there are no quotes or stories in any text unless someone *wanted* them there.
TODO note on editing and original authors
[^writing]: This rule stops being true once we reach modern times with ubiquitous writing. It's so cheap to document stuff now that we get a lot of unintentional or at least un-edited text.
## Conventions
I'm sticking to a few rules in this overview (and the rest of my writing).
1. Whenever I use a name, I'll stick to the most common English version, but I'll give original names in their respective section. Some characters vary dramatically depending on the community, so to untangle them I'll use fanfic tags whenever I'm talking about a specific variant (e.g. catholic!Jesus or marcionite!Jesus).
2. Texts are always linked in both original[^original] and translated versions. If it's unclear what the original language was, I'll mention all plausible candidates.
[^original]: There are three reasons for this.
1. I'm a language snob. I absolutely hate translations. I'd rather fight hundreds of hours with a dictionary than read a translated work.
2. Many names and stories are based on puns. You wouldn't notice them in translated works. You really need to look at the original to see some connections that would've been obvious to the original readers. Same goes for idiosyncratic word choices and so on.
3. Translations are never precise. What can be elegantly expressed in one language might need a whole paragraph in another. So when translating, you have to sacrifice either style or content. That's really bad for an historical analysis. You really gotta read the original.
3. I often give probability estimates that reflect my own certainty in a particular belief, like so: "(muflax: 50%)". I sometimes also give them for other writers, but then typically without numerical estimates.
4. A "myth" is any kind of story, true or not. "Fiction" is not true, and obvious as such to the intended reader. A person is "mythological" if they appear in myths and "historical" if there is evidence to attest their existence outside of myth. Someone can be both at the same time: [Adolf Hitler][] is historical, but [Jetpack Hitler][] is mythological.
Dramatis personae
=================
Who are all these people?
TODO: Stammbaum
The groupings are a bit arbitrary and overlap somewhat, but I think they make the most sense this way. I've ordered them roughly by importance, but that's not a value judgment. I totally like Longinus too.
Prophets
--------
### Jesus
aka Joshu, Yeshu, the Son of God, Christus, Chrestos, Isa
I'll use "Jesus" as a collective name for all these persons and otherwise use the relevant specific version. This might be a bit confusing at first, but I do this to separate the traditions and make it easier to see just how messed up the modern myth is.
### John the Baptist
aka John the Baptizer
### James the Just
Apostles
--------
### Paul
- Paulus, St. Paul
- Saul of Tarsus, Saulus
- Simon Magus
### Peter
aka Simon Peter, St. Peter, Petrus
### Judas Iscariot
aka Judas the False One
Church Fathers
--------------
("Fathers?" No women? Well, honestly, not really. Most influential early Christians were men.)
### Marcion
### The Ecclesiastical Redactor
Stephan Huller and Robert M. Price think he's Polycarp.
### Augustine of Hippo
### Polycarp
### Theophilus
Historians
----------
### Josephus
Politicians
-----------
### Herod
Others
------
Bible Scholars
--------------
### Robert M. Price
### Stephan Huller
### F.C. Baur
Texts
=====
> The Old Testament is history, genealogy, a system of laws for a specific
> nation, a system of arbitration of disputes, building instructions for the
> temple, a guide to hygiene and manners, music (now poetry), and "Instruction
> in Wisdom". It would be like if there was a single book that had the
> constitution, a record of the Lewis & Clark Expedition, the collected works of
> Walt Whitman, the family tree of George Washington, the layout of The Mall in
> DC, and the directions for running a session of congress. If we called the
> book, "The Book Of AMERICA", it wouldn't mean you need to chop down a cherry
> tree in order to be a citizen.
>
> -- [Shamus Young][shamus bible]
[shamus bible]: http://www.shamusyoung.com/twentysidedtale/?p=12768&cpage=1#comment-231273
Q(uelle)
------
Ur-Lukas
--------
Apostolicon
-----------
Toledoth Yeshu
--------------
Historicity
===========
> The Buddha, Jesus and Mohammed walk into a bar. He orders a beer...
*But maybe there's a historical Jesus all the mythological accounts are based on?*
I don't think so. (muflax: 70%) You may be able to reconstruct some plausible minimalistic accounts, or reduce him to some other figure like Siddhattha Gotama, but I think both of these approaches miss the point. Back in the old Soviet Union, people told this Radio Yerevan joke:
> The Armenian Radio was asked: Is it true that Ivan Ivanovich from Moscow won a car in the lottery?
>
> The Armenian Radio answered: In principle yes, but it wasn't Ivan Ivanovich but Aleksander Aleksandrovich, he isn't from Moscow but from Odessa, it was not a car but a bicycle, and he didn't win it, but it was stolen from him. Everything else is correct.
At some point you just have to let go and say "he isn't real".

View File

@ -2,7 +2,8 @@
title: On Samsara
date: 2011-08-02
techne: :done
episteme: :broken
episteme: :discredited
disowned: true
---
> [The teacher] said, "You know, most of you are not qualified for *samsara*! Let alone the pursuit of nirvana. Do any of you have *jobs*?" And what he got on to was this question of being successful at samsara. It was really an important issue. There is this idea of *revulsion with samsara*. People hear this, "You must become revolted with samsara in order to become a Dharma practitioner!". And many people seem to misunderstand this as, yes, I'm revolted by samsara because I can't keep my bank balance in credit, I've got a problem with personal hygiene, whatever the issue is, people don't like me, I'm always doing the wrong thing and yes, it's miserable, I wanna go and live in a nice Tibetan center where I don't have to deal with it anymore.

View File

@ -8,130 +8,4 @@ toc: true
non_cognitive: true
---
**muflax, n**: information whore, hacker, aspiring crackpot, anagami, catharsis junkie, not exactly sane
A quick overview of various beliefs. Useful as a [belief dump][Core Dump], but really, muflax just likes filling out profiles about itself. Maybe it's a signaling thing. Who knows.
Epistemology
============
## A priori knowledge?
There is no meaningful distinction between a priori / a posteriori knowledge. There is no such thing as knowledge without experience. Truth is not an independent property of statements, but the ability to use them to anticipate future experiences. In other words, a map is true if I can use it to navigate. It is meaningless to speak about the truth of a map that doesn't have a territory.
## Abstract objects?
Don't exist. Period. Or rather, what do you anticipate either way? Can you point at an abstract object? There isn't even a phenomenon in need of explanation.
## External world?
There is no distinction between internal / external worlds. It's a bad case of dualism. Look at computationalism: if everything is a program, then there is no such thing as a world outside a program. Input/output are simply features *of* programs. There is no *beyond*. Similarly with our existence.
Religion
========
## Religious affiliation?
1/3 Buddhist, 1/3 Christian Atheist, 1/3 Discordian.
## Is there a God?
"[No, dear.][Fry God]"
## Do you serve the gods?
Yes. This is important.
Ontology
========
## Free will?
There is no free will. It isn't even a useful illusion. It just isn't there. Says [Susan Blackmore][Blackmore Free Will]:
> It is possible to live happily and morally without believing in free will. As Samuel Johnson said, "All theory is against freedom of the will; all experience for it." With recent developments in neuroscience and theories of consciousness, theory is even more against it than it was in his time. So I long ago set about systematically changing the experience. I now have no feeling of acting with free will, although the feeling took many years to ebb away.
>
> But what happens? People say I'm lying! They say it's impossible and so I must be deluding myself in order to preserve my theory. And what can I do or say to challenge them? I have no idea - other than to suggest that other people try the exercise, demanding as it is.
>
> When the feeling is gone, decisions just happen with no sense of anyone making them, but then a new question arises - will the decisions be morally acceptable? Here I have made a great leap of faith. It seems that when people discard the illusion of an inner self who acts, as many mystics and Buddhist practitioners have done, they generally do behave in ways that we think of as moral or good. So perhaps giving up free will is not as dangerous as it sounds - but this too I cannot prove.
## Materialism? {#materialism}
Materialism is necessarily dualism and false. There has been a recent ret-con of the term materialism to mean "whatever the scientific consensus says". I oppose this move.
*Materialism* means that all there is, is an interaction of matter. We know this to be false. Gravity can't be accounted for (so far), nor can mathematical claims, nor phenomenal experience.
An extension of materialism is *physicalism*, which now also includes fields and other ideas from physics. This improves the situation, but not by much. It's also a very ugly ontology.
All broadly materialistic approaches are necessarily false. You have to start from idealism, assuming (some) mental events as basic. There is no need to introduce non-mental things like "matter".
*Computationalism* is the idea that everything is the computation of an ideal program. This is a much better ontology and it accounts for mathematical claims, all (known) physics and many (all?) anthropic problems. Unfortunately, it doesn't cover phenomenal experience.
Whether computationalism is just incomplete or something entirely new is needed is anyone's guess. I have no strong opinion on the matter.
## Naturalism?
Naturalism is correct, in the sense that there is no "magic" or fundamental "mystery" that can't be resolved.
## Personal identity?
Depends on what you mean by "self". One "self" has a name, a job, status, friends, memories and so on. This one is linguistically constructed. Another has experiences. I have no idea how that one works in detail. If I didn't live in a social context that demanded that I maintain a "self" persona, then I wouldn't even bother at all. I do not have any experience of a "self" in any meaningful way.
I do not know if it is meaningful to say that a person persists over time or if there are many person-moments who are fundamentally disconnected.
## P-Zombies?
The Zombie position can be separated into two distinct ideas, a strong and a weak one.
The strong (and original) position is that of zombies being externally absolutely identical. You couldn't, through any experiment whatsoever, figure out whether you are dealing with a zombie or not. Neither could the zombie itself. This is completely bonkers.
A weaker position, however, is far more interesting. Exactly how necessary is consciousness, really? Could you build something that does more or less the same things as a human, e.g. can reason, use memory, simulate outcomes, talk and so on, but is completely unconscious? Maybe. I strongly suspect that most aspects of the human mind can be implemented in an unconscious way (or already are). As such, assuming all people at all times to be conscious is almost certainly false. Exactly what role consciousness plays, however, I don't know.
Morality
========
(See the category [Morality][] for detailed thoughts.)
## Meta-Ethics?
No established position, but closest to deontology or atheistic divine command theory. Yes, I'm aware of the contradiction.
## Moral Realism?
Yes. Being Moral is not a personal preference, not a choice, not a confusion.
## Cognitivism?
Mostly yes.
## Newcomb's problem: one box or two boxes?
The only two reasons to ever pick two boxes, as I see it, are that you either don't trust the oracle, in which case you don't understand the question, or that you think you can break causality, in which case, good luck with that.
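The "don't trust the oracle" escape route can be made concrete with a minimal expected-value sketch. This assumes the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one); the predictor accuracy `p = 0.99` is an illustrative assumption, not anything fixed by the problem:

```python
# Newcomb's problem: box A (opaque) contains $1,000,000 iff the oracle
# predicted you would take only box A; box B (transparent) always holds $1,000.
# p is the oracle's accuracy -- 0.99 here is an illustrative assumption.

def expected_value(choice, p=0.99):
    if choice == "one-box":
        # With probability p the oracle foresaw one-boxing, so box A is full.
        return p * 1_000_000
    else:
        # With probability p the oracle foresaw two-boxing, so box A is empty,
        # and you get box B plus (rarely) a mistakenly-filled box A.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# As long as the oracle beats a coin flip, one-boxing wins by a wide margin.
assert expected_value("one-box") > expected_value("two-box")
```

Solving for the crossover, one-boxing has the higher expectation for any accuracy above about 0.5005, i.e. whenever the oracle is even marginally better than chance. Two-boxing only looks attractive if you deny the oracle's reliability outright.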
## Prisoner's Dilemma?
Very tricky situation, no simple answer.
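Why it's tricky can be shown with the textbook payoff matrix (a minimal sketch; the specific numbers are the standard illustrative ones, measured in years of prison, so lower is better):

```python
# Standard Prisoner's Dilemma payoffs: (my years, their years), lower is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def my_years(me, other):
    return PAYOFFS[(me, other)][0]

# Defection strictly dominates: whatever the other player does,
# I serve less time by defecting...
assert my_years("defect", "cooperate") < my_years("cooperate", "cooperate")
assert my_years("defect", "defect") < my_years("cooperate", "defect")

# ...yet mutual defection leaves both players worse off than
# mutual cooperation. That is the dilemma.
assert my_years("defect", "defect") > my_years("cooperate", "cooperate")
```

Individually rational play lands both players on the worst collective outcome, which is why no simple answer falls out of the payoffs alone.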
## Trolley problem?
I don't switch. The one has the right to not be harmed by me, regardless of the expected death of the many.
## Vegetarianism?
Animals are [not morally relevant][Vegetarian], so no.
Politics
========
The Enlightenment was a huge catastrophe and everything after it is worthless (so far). Beyond that, I have no strong opinions (yet).
Science
=======
## Favorite Quantum Physics Interpretation?
Anything besides Copenhagen. I am not a physicist, so beyond that I have no preference.
## Great Filter?
Probably valid, probably late, most likely the result of progress being extremely hard.
**muflax, n**: information whore, hacker, aspiring crackpot, catharsis junkie, not exactly sane