cleaned up drafts

master
muflax 2012-05-01 17:59:38 +02:00
parent 97ba3ca9da
commit 4301d762d4
12 changed files with 0 additions and 814 deletions

View File

@ -1,70 +0,0 @@
---
title: Becoming the Unchanging
date: 2012-04-15
tags:
- acausal
- buddhism
- cessation
- consciousness
- morality
- rationality
- yangming
techne: :wip
episteme: :speculation
---
<em>I'm mildly afraid to talk about my thoughts. The moment I present an idea, I begin to strongly believe it. This is of course how evangelism works - its purpose is to convince the missionary, not the heathen. Writing about it, though, doesn't seem to cause this. It forces me to address any holes and assemble a coherent (enough) idea, but often fails to trigger integration. I can write about certain ideas for ages without ever adopting (or rejecting) them. But sometimes, talking about an idea finally causes <em>decompartmentalization</em>. This is an attempt to trace a recent one.</em>
So I was having a short discussion about acausal trade. Acausal trade is the idea that agents can cooperate despite not having a direct causal link, but by sharing a decision algorithm. The classical example is <a href="http://en.wikipedia.org/wiki/Newcomb%27s_paradox">Newcomb's box</a>, i.e.:
<blockquote>A person is playing a game operated by <em>the Predictor</em>, an entity somehow presented as being exceptionally skilled at predicting people's actions. [...] The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.
By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. [...] The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.</blockquote>
Of course, your best strategy is to take only box B. You don't strictly need acausal trade for that because you could just run a simple simulation yourself: in one case, 100 agents all take both boxes, in the other, 100 agents take only one. Because the Predictor is so good, most of the first 100 agents would end up with $1,000, but most of the latter agents win big. So statistically, it's best for you to adopt the one-boxing strategy.
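A minimal sketch of that back-of-the-envelope calculation (the 0.99 accuracy is my assumption; the puzzle only says the Predictor is "exceptionally skilled"):

```ruby
# Expected payoff per strategy, given a Predictor with the assumed
# accuracy below.
accuracy = 0.99

# One-boxers get $1,000,000 iff the Predictor saw them coming.
one_box = accuracy * 1_000_000

# Two-boxers always get the visible $1,000, plus $1,000,000 in the
# rare case the Predictor wrongly expected them to one-box.
two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

puts "one-boxing: $#{one_box.round}, two-boxing: $#{two_box.round}"
# => one-boxing: $990000, two-boxing: $11000
```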
But imagine <em>both</em> boxes were transparent.
Now, you might think that this is trivial. If B is empty, you'd be crazy to one-box. But think about it. How does the Predictor actually make the prediction? What if it runs a simulation? What if <em>you</em> are this simulation? That would mean that your decision now would influence the result your second instance would face. If you two-box given an empty box, then your second self will also get an empty box. This is not profit-maximizing!
It doesn't really matter how the simulation is actually implemented. Maybe the Predictor is a powerful AI and runs an approximation of you as a subroutine. Maybe it's a mad scientist who fires a tranquilizer gun at you and wipes your memory after the first run.
But, and this is where acausal trade comes in, let's go back to the first scenario, with an opaque box. You don't know if, maybe, you're the simulation. But you sure wish that <em>if</em> you were, you <em>would</em> one-box. So you would really want to influence another agent, despite not being causally able to do so. You can't communicate with this other agent directly. However, there is a way - because you share a decision algorithm!
Whatever process you use to make decisions, the other agent uses the same one (or something very close to it). So whatever decision <em>you</em> come up with, they will too. This means that simply deciding <em>now</em> that you would one-box, given the choice, is enough. (Even if both boxes are transparent!) As long as this decision is sincere, the other agent will decide the same way and you will benefit. You just engaged in acausal trade.
Anyway. Acausal trade doesn't necessarily have to happen between agents with identical decision algorithms; they merely need to know each other well enough to predict outcomes based on their own decisions. But that's not really my point.
The point is that a general principle that follows from this insight is that agents should act as if they controlled all instances of themselves simultaneously. This is, in a way, a variant of the <a href="http://en.wikipedia.org/wiki/Categorical_imperative">categorical imperative</a>:
<blockquote>Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.</blockquote>
However, instead of having to make the assumption that all agents <em>in general</em> will be controlled by you (they won't), you merely assume control over agents with similar decision algorithms or decision algorithms that depend on you.
One day in a supermarket, I wished I had a coherent self. I thought, it would be really neat if I could embrace this view. My food preferences would be consistent. I wouldn't want one kind of food and buy another. I would make the same decision for all my instances, so I could easily decide against unhealthy but tasty food. I would have an overarching self of consistent choices. That'd be awesome.
And I began to think, how large is this self? Who, exactly, am I making decisions <em>for</em>? When I discussed this, I went, obviously all future instances of yourself are part of it. Which means that any kind of <a href="http://en.wikipedia.org/wiki/Temporal_discounting">temporal discounting</a> is a bad idea. You should always act as if the consequences of your actions applied <em>right now</em>. In other words, your actions should be consistent over time. This is, of course, part of what Wang Yangming was talking about with regards to moral truths. Once you understand a (moral) principle, you can't <em>not</em> act according to it. A view I myself have <a href="http://blog.muflax.com/2011/03/02/on-studying/">endorsed</a>.
But when I tried acting like this, I realized this was ill-defined. Future instances of myself means what, exactly? Anything that self-identifies as muflax? Anything in this body? I couldn't clearly delimit "myself" in any dimension. Then I remembered that it was weird for me to think in terms of a "self" in the first place. I mean, I dissociate heavily. I have this view of myself as a myriad of slices over time, each representing a tiny aspect of this brain being in control, who are all fundamentally independent agents. They sometimes cooperate when goals happen to match, but essentially, muflax(t+1) isn't muflax(t). Even worse, there isn't even a unifying stream of consciousness, there is merely one moment of consciousness now that through memory falsely believes to have a continual existence.
But I didn't fully internalize this view at the time because I thought it had a consequence I didn't want to embrace - <em>long-term selfishness would be incoherent</em>. Or in other words, it would make no sense to say, I do this so I may benefit from it later. muflax(t+1) is as much me as random_person(t+1). Why would I favor one and not the other? The only coherent scope for muflax(t)'s goals is <em>right now</em> and that is it. Which is what the Buddhists have been telling me for a long time. It didn't surprise me that people holding this view don't get anything done - there is no <em>point</em> in getting anything done! Also, universal altruism seems to follow directly from it. Or, as Eliezer says:
<blockquote>And the third horn of the <a href="http://lesswrong.com/lw/19d/the_anthropic_trilemma/">trilemma</a> is to reject the idea of the personal future - that there's any <em>meaningful </em>sense in which I can anticipate waking up as <em>myself</em> tomorrow, rather than Britney Spears.  Or, for that matter, that there's any meaningful sense in which I can anticipate being <em>myself</em> in five seconds, rather than Britney Spears.  In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the <em>current</em> Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences.  There are simply different subjective experiences.  Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not <em>connected</em>.
I still have trouble biting that bullet for some reason.  Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?"  I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an <em>incoherent</em> utility function; that we are <em>forced</em> to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own.  I don't think that, if I were <em>really</em> selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.</blockquote>
This view, which I previously <em>thought</em> I believed, effectively undermines consciousness-based selfishness. I <a href="http://lesswrong.com/lw/i4/belief_in_belief/">meta-believed</a> that my preferences revolved around my stream of consciousness, going so far as to deny any relevance of phenomena I didn't personally experience. But, and I only got this when working through this, <em>there is no stream of consciousness</em>.
Despite the Buddhists telling me this from day 1. Despite making a great effort to apply meditation techniques specifically designed to understand this. Despite having <a href="http://muflax.com/reflections/quale/">written about it</a>. All not enough to get it. It is one thing to see the visual stream flicker, to see pain waver in and out of existence, but something else to see the sense-of-continued-self disappear. Especially because it is so easy to think of the self as a memeplex, as a psychologically constructed thing, a <em>persona</em>, and to get that this thing isn't continuous or permanent, and then think that realizing this is already the teaching of anatta. It isn't. You can deconstruct all psychology, go into thoughtless trance, and still perceive yourself as having a continuous experience, an ongoing stream of consciousness, a subjective experience spread out over time.
You don't.
I cannot hope to unravel this in writing and won't bother trying, but I can walk through the final process, the analysis that made it click for <em>me</em>. Thinking in terms of algorithms, how does the state transition happen? Does it work something like this: you have one conscious moment now, then the laws of nature are applied, then you have the next conscious moment? What <em>instantiates</em> consciousness? How is a data structure different from running code? If I took all these conscious states and stored them in memory, what would your consciousness look like? How is the order reconstructed from this storage, how do we assemble a stream?
The answer is that we don't. Each conscious moment is self-contained. If you think, but I was conscious before, how do you know that? You have a perception, constructed from memory, that says, "I was conscious before". This <em>feels</em> an awful lot like having been conscious before. But if you believe in a continuation, you are confusing a model of yourself with yourself. So there is only a single conscious experience (and be careful with calling it a <em>moment</em> - time is only a feature of the model). Whenever you try to point to a "self", a subjective sense of existing-over-time, you try to dereference some part of your model to somewhere out of your current perception. In other words, you try to think, "Here is a thought. Here is another. Therefore, I had (at least) two conscious moments, so there is a continuous subjective experience.", but what you really think is the "therefore", which contains simultaneously the perception of remembering having thought two thoughts before. Or in yet another set of words, by thinking this, you confuse the perception of "I remember this" with "I was conscious of this". Try to see yourself as a fully resolved node in a causal graph, not referencing any other node except through causation.[^model]
[^model]: This does not mean that your time horizon is necessarily only <em>now</em>, i.e. that it becomes impossible to plan. Your current blip of experience can certainly contain models of yourself and other things. It is merely your subjective experience that is not spread out over time.
Poetically, the world is destroyed and recreated every instant, each moment-of-consciousness flickering in and out of existence, unconnected, but containing patterns that link them.[^cessation]
<em><em>"In the thinking, just the thought. In the hearing, just the heard. In the seeing, just the seen."</em></em>
[^cessation]: Another consequence of anatta seems to be that the idea of cessation is incoherent. How can you speak of "starting or stopping to exist"? This seems literally incomprehensible. But this is for another time.
So realizing anatta fully, I saw no way to get to a coherent concept of a self-spread-out-over-time, no ideal basis for decision-making. But I really wanted to! It would be fantastic to have this unifying plan, this strong sense of acting-simultaneously-in-time. My optimization power would go way up. It would be the kind of feat I always <a href="http://blog.muflax.com/2011/09/20/a-rationalists-lament/">wanted from rationality</a>. Can it be done?

View File

@ -1,23 +0,0 @@
---
title: Meditation on Hate
date: 1970-01-01
tags:
- dark stance
- meditation
- salvation judo
- vipassana
techne: :wip
episteme: :speculation
---
New article: a [Meditation on Hate]().
Now some commentary.
I have completely fallen in love with the [Dark Stance](). Unfortunately, I'm having a hard time finding people in recent history who have done serious work with it or explored the vast and interesting terrain it offers. There are traces everywhere, but no fully developed path, at least not in any language I understand. The further back in history I go, the more prominent the Dark Stance becomes, clearly guiding the old [Cynics]() and various forgotten gods - but history has been filtered dramatically, both by our forgetting and by rival memeplexes trying to erase all competition. But I like the challenge; I always wanted to construct a religion from scratch. It seems I will have to.
Unfortunately, I fear I am losing the ability to show *why* I am so fascinated by the Dark Stance, what exactly it is that draws me in. It is not a reaction to disappointment, not motivated by some negative expectation or personal failure, and certainly has nothing to do with transformation. It derives solely from the immediate emotional experience of awfulness. It just made \*click\* one day, feeling "this is entirely awful" and "this is the right thing to do". There is no justification, no goal, no purpose at all. It just is the right thing to do.
All happiness and its related emotional states, at least as I have experienced them, are fundamentally *betrayal*. They are distractions, always distanced from what I can only call [suchness](http://en.wikipedia.org/wiki/Tath%C4%81t%C4%81/Dharmat%C4%81). I don't like the term either, but I lack a better one. All this talk of beauty, of love, mercy and bliss, over so many years, and it all amounted to nothing, but within pain I finally find clarity. Not peace, mind you, nor surrender. The Dark Stance is entirely dissonant. It devours me, is violent, uncontrollable, but always... *there*. I am in a state of constant agitation, yet I find clarity. I do not know if this is a special property of these states, or just testament to how twisted my mind has become, but I value the experience greatly regardless. As the great [Lepht](http://www.youtube.com/watch?v=a-Dv6dDtdcs) has said, it is not self-harm if it does something.
I find this approach deeply ironic because it is essentially the exact opposite of what I was doing back in my vipassana days. Back then, I spent most of my time sitting in the so-called Dark Night jhanas, mentally curled up in a tight ball of anxiety, trying to make progress, *any* progress. I was throwing more and more energy at the problem, hoping I could at least reach equanimity. I was always disappointed when I had temporarily reached peace-of-mind, only to slide back into anxiety. Now I'm doing the reverse. I have come to *despise* equanimity and actively try to *prevent* any transformation. I want just anxiety, just disgust, just hatred to exist and not *go* anywhere. It is almost effortless. However, I am constantly being pulled *towards* transformation, could very easily go into equanimity, but I refuse. This strengthens my intuition that all mental difficulty is imagined, is really just an adversarial mental process trying to scare you away. Unfortunately for this adversary, I don't care anymore. I do not want the progress it protects anymore. Once you choose Hell over Heaven, Satan loses all importance.

View File

@ -1,27 +0,0 @@
---
title: A Little Requiem to a Successful Suicide
date: 1970-01-01
tags:
- kali
- suicide
techne: :wip
episteme: :speculation
---
Back in early 2010, I already attempted to work through my experiences with mysticism. Some traces of that can be seen in my [writing at the time](http://muflax.com/reflections/con_exp/). [Recently](http://blog.muflax.com/2012/01/03/how-my-brain-broke/), I actually finished this project and found closure. But I noticed an odd thing. Back then, I was still able to work from memory. I could still *feel* what it was like, still had the old persona linger around in my mind.
Not anymore. If it weren't for the notes I made, I would've had a hard time reconstructing the Ayahuasca experience. Some pieces already are a mystery to me and I had to leave them out, could not address them at all. (I faintly remember being in the desert.) In my memory, I can feel a difference between first-person reconstructions and second-hand stories I keep around. They are not strictly separated, of course, but some pieces are *me* and others are just *knowledge about other people*.
What remained of the person who took Ayahuasca is just that now - something that happened to someone else.
What is a person? H. (that person) used to think it was the unbroken stream of consciousness. I don't think I agree anymore. Awareness comes and goes. It is fragmented. Continuity is a useful fiction, one that it is possible to stop telling, if one wants.
A person is patterns, ideas, names. These things are gone now. H. has taken care not to leave many notes, has deliberately destroyed most of them to prevent being revived. He knew I would be interested in him, would search for him. He made it impossible to find him. H. wanted to cease, wanted to die. His connections are gone, his thoughts have stopped, his memories are that of a stranger.
I have found some of H.'s later writing, 2005-06. (I will not link to it.) I don't recognize the person. It's horribly confused, for one thing. It doesn't try to understand anything. It just *claims*. This pattern now annoys me. You present it with an idea and within *seconds* it has a position, knows whether to be for or against it. If you don't see yourself pre-enlightenment, you won't believe how much you have changed.
I am not H. anymore. H. is dead. His memories have faded, and what remains, I don't trust. I know too much about the fragility of memory now, know how unreliable eyewitnesses are. Why should I trust one just because we share a legal identity?
H. had a curious desire. He wanted to die, but also to know what the world would be like once he was dead. I can answer this question now. Out of his ashes, I became flesh, inherited his desires, deal now with his choices. After me, no-one will. I have accepted my responsibility, will prevent further value drift, will not fracture again. In me, incarnation stops.
May Kali devour us all.

View File

@ -1,39 +0,0 @@
---
title: Balancing Your Goals (A Programming Problem)
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---
I use a GTD system called fume (short for "future me") that looks like this:
<a href="http://blog.muflax.com/wp-content/uploads/2012/01/selection-2012-01-24135028.png"><img class="aligncenter size-full wp-image-718" title="selection-2012-01-24[13:50:28]" src="http://blog.muflax.com/wp-content/uploads/2012/01/selection-2012-01-24135028.png" alt="" width="646" height="371" /></a>
([Ruby Gem](https://rubygems.org/gems/future_me). No documentation because fuck you.)
It's driven by a specification file in which I tell it what contexts (projects) I'm working on, what (major) tasks are involved and how much time I'd like to ideally dedicate to each. ([Current version](https://gist.github.com/1670075). The syntax should be self-explanatory, or at least it is if you're muflax.)
Now, of course I know that "2 hours of studying per day" is optimistic. (I only studied 6.5 hours in the last 7 days.) That's not a problem. I use [Beeminder](https://www.beeminder.com/muflax/goals/fume) to force myself to work at least 4 hours/day on *something* (or lose some money). This seems to work reasonably well so far. But this doesn't prevent me from just reading the [Pre-Nicene New Testament](http://www.amazon.com/Pre-Nicene-New-Testament-Fifty-four-Formative/dp/1560851945) all day (and if you look through the data, you can see there are some mostly-theology days). So I need to also balance my projects.
fume checks how many hours I've spent on each context in the last 24 hours / 7 days / 1 month / ever. For each period it then checks how much proportional weight the context has and how much it *should* have, according to the specification. It then calculates how much time I would have to spend on each context to balance it and recommends I work on the one that's the most behind.
Often I follow the recommendation, but sometimes I have additional deadlines or mood swings and just work on something else. Over time, contexts become unbalanced. I forget to actually [read books](https://www.beeminder.com/muflax/goals/books) or program stuff for fun because I [took an overly complicated course](http://blog.muflax.com/2012/01/11/crystallization).
So I also want to beemind how unbalanced my life is and minimize this value. For some reason I've never seen anyone *solve* this problem. (Am I the only one who has it? Do others just rely on a vague feeling?)
One way would be to enforce weekly minimums for each goal, but my weekly total productivity fluctuates a lot. I have weeks during which I get maybe 5-10 hours of stuff done, then others with 80 hours. So this doesn't capture the problem.
Ideally, fume would calculate the minimal effort I'd have to put into each goal to balance it, then add it up and use this as my current un-balance. But calculating this is surprisingly hard. (At least if you're muflax.) Balancing one goal will increase the total time worked, which will unbalance all other goals.
You could visualize the problem like so: the goals are a bar graph, each bar representing the time spent on each context. You have a curve laid over the bars. It's normalized so that the area beneath it is 1, i.e. it represents the (ideal) weight of each bar. You can stretch the curve by multiplying it with any arbitrary (positive) factor. Your goal is to find the smallest factor so that no bar is *above* the curve. The distance between curve and bar is how much time is missing; the sum over all bars is the effort currently necessary to balance all goals. How do you determine this factor?
(If that is not understandable, well, sorry. I *could* draw a picture, but then I'm lazy. (Oh noes, I'm reinforcing a harmful identity! I'm not maximizing my expected utility! (Oh noes I'm signaling cynicism! (Oh noes I'm meta-cynical! I can see the Specter of Robin Hanson coming for me! (Oh noes I'm using too many parentheses!)))))
I've googled curve fitting etc., but couldn't find any existing solution, so I tried coming up with my own. (Try it yourself! Math is fun! \[Lie\] \[Speech 55\]) I trapped myself for almost 2 hours trying to figure out an elegant solution, maybe with some initial sorting, but finally the stupid left me and I did it the simple way: iteratively. I take all goals that are currently over-represented, pick one, and scale the curve so that it fits exactly. Repeat until no goal is over-represented. Done.
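A minimal sketch of that loop (hypothetical names, not fume's actual internals; `spent` maps each context to hours worked, `weight` to its ideal share, with weights summing to 1):

```ruby
# Stretch the curve until no bar pokes above it: whenever some goal is
# over-represented, rescale so that goal fits exactly, then recheck.
# (The factor-2 slack from the next paragraph is left out for brevity.)
def balance(spent, weight)
  factor = spent.values.sum # start with the current total

  loop do
    over = spent.keys.select { |g| spent[g] > factor * weight[g] }
    break if over.empty?
    g = over.first
    factor = spent[g] / weight[g]
  end

  # Hours still missing per goal; their sum is the current un-balance.
  spent.keys.map { |g| [g, factor * weight[g] - spent[g]] }.to_h
end

balance({ study: 2.0, books: 0.5, code: 7.5 },
        { study: 0.5, books: 0.25, code: 0.25 })
# => study is 13 hours behind, books 7, code 0
```

(As it happens, the loop's fixed point is just the largest ratio of time-spent to ideal weight over all goals, so the "smallest factor" has a closed form after all.)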
(A minor point: I only aim to get within a factor of 2 of the ideal ratio. This prevents small goals from escalating the total effort necessary to balance them. For example, if "learn something new" is supposed to take 1%, but already has 2 hours, then the necessary total to balance it is 200 hours - more than a week. The specific factor 2 is just a quick hack and I expect to calibrate it over time until small goals don't fuck things up too much.)
One last problem is that I need to normalize the results. If I spent 80 hours on various projects and forgot one for a bit, I might be unbalanced by 10 hours. If I only spent 8 hours total, I wouldn't be as unbalanced, even if I did only one thing the whole time. Just going by the raw numbers feels like I'm punishing concentrated effort. But then, I get bored by concentrated effort and burn out at least twice a week, so maybe that's not a bad idea.
What I really want is a measure of how *inefficiently* I'm spending my time. How unbalanced my life is. Not how much effort I'm putting in (separate measurement) or how hard it is to undo my damage. More like a health check: "You seem to ignore something. Would you like to work on that [>200 books reading list](http://www.librarything.com/profile/muflax)?" So it should be dependent on total effort, but I

View File

@ -1,272 +0,0 @@
---
title: Temporal Lobe Experiences
date: 2010-05-14
techne: :wip
episteme: :believed
---
Personal Info
=============
Something about me.
Interestingly, my left eye is about 0.5 dioptre better than my right, just like
my father's eyes.
Most Recent Seizure
===================
2010/05/13 at 21:05.
It was day 3 of my caffeine withdrawal. The headaches were already over, but I
was still very irritable (one little inconvenience and I'd write a 5000 word
rant) and could barely think. No memory or concentration whatsoever. The best I
could do was read some funny websites and eat strange cheese.
The first thing I noticed was that the upper part of my inner left mouth felt
weird, as if a bit of fluid was oozing out of my skull, soon followed by an
actual, but faint sound of bone-on-bone movement. If that sounds very confusing,
that's because it was. I first thought, alright, probably just my jaw moving in
a weird way or something like that, when I spaced out a bit. I just relaxed and
lost all mental content for a bit, but was still aware of what was happening. I was
not sure at that point if I was actually spacing out or just pretending to. A
minute later, I remembered the experiments I wanted to try.
First, I focused on the area to the left and behind me, trying to feel a
presence. There was a vague sense of something being there and a few images
rushed me, but I was underwhelmed, so I tried another one. I closed my eyes,
focused them on the closest point to my face I could manage and then, without
moving them, looked up.
A bit of adrenaline, some light, a bit of color, but that's no more than I see
when I just press my eyeba.. WHAM.
It just fired. I didn't know what *it* was, but my eyes moved wildly, I began
shaking and there was definite rumbling going on at the front of my head. I
snapped out of the meditation and laughed uncontrollably. I jotted down a short
note of the time in my log and ran off to the toilet, pouring water over my face.
I had the wild *I just saw god* face, eyes wide open, still laughing, getting
happier and happier. I ran back, grabbed my rosary and started praying. If
prayer can ever work for me, then now!
I was shocked once I started. That's not *my voice*. It was completely
different, as if there were many voices, whispering and very fast. I seemed to
speak whole chunks at once. It was still *me* speaking, but certainly not in any
way I recognized. I began laughing more and more. After 15 minutes and the first
3 sets of the rosary, the effect finally started to disappear. My voice returned
to normal and I noticed that I found it quite a bit harder to speak. Language
was definitely harder than normal. I still decided to finish the prayer.
Afterwards, I got up and noticed a changed consciousness, as if I was *more*
present or complete than before. I tried thinking, but messed up the words, so
the other I just said, "Just listen. Don't speak. Just listen. Don't speak."
for a while.
(I'm a big fan of dissociation, so I do this intentionally. I'm very aware that
I have many "modes" or "drivers", sometimes competing, and I like to play with
them.)
For some reason, I felt the urge to stand on one foot. I first
tried the right one, but lost balance (which I always do; I have horrible
balance). So I switched and could, somehow, stand perfectly fine, one leg bent
backwards at the knee and both arms stretched to the side. Even pulling the arms
in and moving the right leg around didn't throw me off. That's *very* unusual
for me. Normally, I can't even put on my shoes standing without falling down.
After a bit, I just sat down and was happy for a while. The world shifted for me
and started to *glow* again. Not really literally glow, as in became brighter
(although colors seemed more intense), but more mentally glow. Glow with
meaning. This was very close to the DXM afterglow or how I felt after coming
down from Ayahuasca. Very happy and *aware*, all senses a bit sharper than
usual.
That was either a temporal lobe seizure or the most psychedelic cheese in all of
Europe.
Ghosts
======
As a child at about the age of 6, I had a strong experience of ghosts. I was
sitting on the toilet, when quite suddenly I felt surrounded by a group of dark
grey entities, maybe a dozen or so, each about my own size. They hovered around
me in a circle, located in a mental realm closely related to the one in front of
me when I close my eyes.[^realm] I immediately knew that they were friendly.
They communicated to me, though they never spoke, that they were a kind of
guardian and that I could trust them.
I didn't feel disturbed by this or in any way upset. It seemed perfectly natural
at the time. I started to talk to them occasionally, telling them my thoughts,
similarly to a self-monologue. I stopped doing this after some time because it
started to feel weird, like I was not supposed to be doing this kind of thing.
They didn't reappear until I was 18, when I experimented with *psychedelic
mushrooms*. At that time, I had drug experiences with *Caffeine*[^caff] (but not
alcohol until about a year later), *Argyreia Nervosa*[^argy], *Nutmeg*[^nut],
*Ayahuasca*[^aya] and *DXM*[^dxm], in that order, but in none of them did I ever
encounter another entity or presence. However, that summer I had just grown my
first batch of shrooms and ate about 2 grams of recently dried ones on toast
with honey[^honey].
After a few minutes, I felt a powerful sense of joy and lightness. I danced
around and strangely really enjoyed juggling objects, like my water bottle. I
felt I could slow down time and gravity slightly, making it a lot easier to
catch something. After about half an hour I was overcome by a bright light and
sense of bliss. I sat down in my chair and closed my eyes, when I had the
impression of facing a great Pyramid in Egypt, bathed in sunlight. Suddenly, I
was connected to the whole human species (and maybe more).[^6th] The Collective
Unconscious[^coll] was available to me. I believed that my true purpose in life
was now clear to me. (Although, to be honest, I never exactly *knew* what that
purpose actually *was*. It was more a feeling of complete trust in fate, without
ever knowing any details.) Soon, I felt the presence of many beings. I was
consciously aware of maybe half a dozen, but knew that they were legion. I
recognized them from my childhood. I asked multiple questions, mostly about
future choices and, when thinking of a possible answer, got a powerful emotional
response. I was being showered by pure love when I thought of the right answer
and pulled away from any wrong one.
I do not remember anymore if I fell asleep for maybe half an hour or not, but
the experience soon faded away and I started to play Katamari Damacy. While the
most intense part was now over, I continued to feel full of energy for the next
few days. The personal connection with fate is still there today.
However positive the first experience was, all future shroom trips except the
last one were much more negative. I would inevitably encounter the ghosts again,
but they were disappointed in me. They made it clear that I couldn't handle the
experience and shouldn't come there anymore.
Being Haunted
=============
When I was 17, I had what could be called a psychotic episode. I was depressed,
worried about many things in my life and was still dealing mentally with my
former girlfriend (more on that later). But that's not the real problem. That I
could deal with; I knew that I would one day be able to overcome all those
problems. (I was right. It took me about 3 years.) However, it got worse when I
started feeling haunted. It started with a general sense of unease once I
entered my room, but after a few days I started hearing voices. At first, I
heard noise on my speakers that wasn't there. I could even turn them off
completely and there would still be barely noticeable noise. Soon, that noise
whispered to me. *All the time*. I couldn't make out anything it said, not like
a schizophrenic who hears commands (although I thought at the time I was one).
It sounded more like ominous, satanic chanting.
Especially at night it sometimes got so bad that I couldn't sleep at all. Once,
I was woken up at around 4:00 by a sudden, bright and incredibly loud mental
*flash* of a pentagram with Baphomet on it. I was terrified and scared for my
life. My sleep didn't recover for months. I tried dealing with it by meditation,
but I couldn't concentrate at all in silence, with the permanent evil
whispering. I also tried doing a demonic incantation (no result) and an
exorcism (which temporarily worked!).
The voice was physically tied to my room (but not to anything in it).
Interestingly, our neighbor was an astrologer and big believer in the
supernatural. I never told anyone about my experience, but learned that she
recently had done a kind of seance with some medium and found out that the
basement of our shared house was cursed - exactly where I lived. She had her own
exorcism scheduled, but luckily we moved out, leaving the presence behind. I
never encountered it again. Within weeks after we left, the whole basement was
flooded because of faulty architectural design.
Note that during the whole time I didn't *believe* in ghosts, demons or any
supernatural entity. However, at the end, I sure had my doubts about it!
Nonetheless, I still don't believe the cause to be an actual supernatural
entity, but I'm quite open to the idea that it was still a real experience.
Persinger's explanation of such phenomena through magnetic disturbances seems
like a good candidate to me.
Romantic Love
=============
Sensory Shutdown
================
Bathroom. No sound at all. Voice bright, with very high contrast.
Anxiety
=======
Social Problems
===============
At first, I thought I was an autist. (I even have a tentative diagnosis for it,
but never followed up on it because I found enough evidence to disprove it
myself.) When that didn't quite work out, I went with ADD, mainly because of the
unusual reaction to caffeine, which calmed me down instead of making me hyper,
something typical for people with ADD or mania. But that didn't quite work,
either, as my ability to concentrate didn't exactly work as would be predicted
by ADD (I would often go into short bursts of high focus, becoming obsessed with
a topic for a month or so, and then switch to something completely different).
Also, there were too many unexplained symptoms left.
I analyzed my social problems more thoroughly. It's really not that I don't
*understand* social interaction. If I watch others, I know very well what they
are doing and why. It's not mysterious at all to me. But when *I* am supposed to
act, I simply... draw a blank. There is no memory, no idea, nothing. My mind
goes entirely silent and I can only stare. I'm perfectly aware of this all the
time and desperately try to fix it, but just don't get any answer inside.
However, that only happens with *some* people. With others, I function
normally and probably talk quite a lot. That way, almost everyone either knows
me as silent or talkative, but not much in between. There is no connection to
sympathy - I shut down with plenty of people I like a lot, but because it is so
incapacitating, I tend to only become friends with the people I *can* talk to. I
still can't tell in advance whether this will happen just by knowing something
about the other person. There is no connection with topics, gender,
intelligence, age or anything else I could think of. It is very consistent,
though, just seemingly random in who I'm open to and who not.
Another important puzzle piece is that I don't *care* much for social
interaction. This is atypical for autists, who tend to want to interact with
people (at least in some situations), but just can't, which leads to many just
"giving up" on friendship. This lead me to believe I was more schizoid, but the
emotional flatness that comes with it just doesn't describe me at all. Also,
*some* people I do care about. Instead of being more or less equally interested
in most people, with maybe a few spikes for close friends and family, as is
normal, I have zero interest in almost everyone, but strong devotion of
Kierkegaardian proportions to a select few. I still have a very positive
attitude in general towards people, which is not very schizoid; it's just that
most people don't seem to be as enjoyable as ice cream to me, for no reason I
can discern, but some are like ecstasy, at least some of the time.
Eccentricity
============
It's not so much that I don't *know* what's normal, but more that I don't
*care*.
[^6th]:
Basically, the 6th Morphogenetic Circuit, for those of you that know some
Leary or RAW. (And you all should. *Prometheus Rising* is highly
recommended.)
[^argy]: Argyreia Nervosa
[^aya]: Ayahuasca
[^nut]: Nutmeg
[^caff]: Caffeine
[^dxm]: Dextromethorphan, DXM for short, is my favorite drug. It dissociates me
from any negative or disruptive emotion, gives me immense concentration, a
strong sense of wonder, makes me even more verbose and music... oh boy, how
music sounds on it! I try hard to cultivate the DXM state as my normal
mental state.
I also like that it causes only my left pupil to dilate, making me look
literally like this: o_O
[^realm]:
There are many experiential spaces. For me, thought is fundamentally a
spatial thing and I tend to create a new space in which I arrange things
whenever I analyze or organize something. They are mostly 2- or
3-dimensional, although I have been able to create 4-dimensional spaces,
too.
[^honey]:
I chose honey because I had been told that they taste awful and I knew to
take such warnings seriously after Ayahuasca. Ironically, I came to really
like their taste and now get really bad stomach cramps from honey (probably
because of the high amount of sugar).
[^coll]:
Although I don't like the term Collective Unconscious because it never felt
particularly *un*conscious to me. I always thought it was closer to the
Malkavian hive mind.

View File

@ -1,39 +0,0 @@
---
title: Teaching Morality Through Examples
date: 2011-11-24
techne: :wip
episteme: :believed
---
# Introduction
Traditionally, morality is approached through definitions and rules. I tell you "consequences matter" and then you know that consequences are morally important. This doesn't work. Centuries of debates have shown that no rule really works. At worst, it introduces politics. Now it's [consequentialists][Consequentialism] vs. [deontologists][Deontology] and we don't get anywhere.
I want to try a different way. In education, we already know that definitions and rules are useless. We need examples and classifications. The words we use aren't relevant. So I'm not going to teach you "morality". I'm teaching you a specific concept that matters a lot to me. Sometimes I call it *morality*. But this time I'm going to call it *liangzhi*. You probably don't know what liangzhi means. That's good. There won't be any wrong associations in your mind. It isn't a concept that maps to any particular word. You can't translate it. But you can learn it anyway.
Here is how. I will give you a couple of examples. For each example, I will tell you if it is liangzhi or not. Then I will give you some unclassified examples and ask you if you think they are liangzhi. (Please really answer.) Then I tell you if you're right. After the examples, you should get it. (If you don't, I failed.) You might not know how to put liangzhi into words and worry. Or you might want to say "Oh, liangzhi means X!" Please don't do either of these things. Just accept "I now know what muflax means by liangzhi because I can look at certain situations and recognize their liangzhi-ness." This is all you need. You don't need theories or definitions. You just need to know. Then right action will follow.
# Liangzhi
## Consent
## Contracts
## Duties
## Honor
I strongly recommend watching Winter's Bone as study material. Virtually all the characters in it exemplify this virtue.
## Liangzhi
# Some Comments
This teaching approach is called [Direct Instruction][]. It's based on [Engelmann][]'s [Theory of Instruction][]. The name "liangzhi" means "innate knowledge" and comes from [Confucianism][]. I took it from [Wang Yangming][]. The sub-concepts are similarly taken from Pali, Chinese and other languages. You can google them if you want. The meanings I taught you don't exactly correspond to the original ones, but that doesn't matter. Labels are irrelevant. The more alien they are, the better. What you need are wordless ideas. Using a language you already know will just confuse you.
The idea that you only need to properly understand something and then right action will always follow is called the [Unity of Knowledge and Action][] in Yangming's philosophy. You are never divided. You can never fail to do what is right. You can only be confused.
I have covered several important positions in morality. Please don't think I'm directly advocating a specific take on them, or that you have to adopt them. This is not about politics. However, these topics have lots of good discussion.
- Antinatalism
- Deontology

View File

@ -1,48 +0,0 @@
---
title: A Meditation on Hate
date: 2012-02-07
techne: :wip
episteme: :believed
---
> Jesus says, I have let loose fire upon the world, and behold, I tend it until the world is consumed.
>
> -- Thomas 10
Something bad happened. The specific harm is of no relevance. No-one can be found guilty, no reparation can be made, the harm cannot be undone. Yet again in this life, I suffered. So far this was not remarkable. The cause of my grief, bad as it was, was not special in the grand scheme of things.
I knew that the pain would diminish, would be transformed away. Time would pass and eventually, I would not grieve anymore. I would at first have moments of neutrality, then of happiness again. I would slowly forget my loss and it would not seem quite so salient anymore. Eventually, all would've faded and normality would return. Suffering would change to contentment once again.
But then something unique happened. *I resented this change.*
This is the flip-side of the Hedonic Treadmill. It is not just your joy, not just the ecstatic bliss that will normalize and return to your set point. Even your hate, your grief, all your acquired and justified pain, will eventually be taken from you.
*I refuse.* I will *not* be denied my grief.
TODO technique
I relive the moment of separation. I visualize my heart being ripped out of my chest. I create tension in my body, seek unpleasant, uncomfortable positions, so I can focus solely on the awfulness of the experience.
TODO awfulness
And so I chant:
> [Kali][], grant me my grief,
> and strengthen the feelings of loss.
> May I never become happy,
> so I can always remember the pain of the harm I must endure.
>
> Kali, grant me my grief,
> so I may forever suffer,
> knowing I shall not forget the harm I have been caused.
>
> Kali, may you devour us all,
> and until the day comes,
> grant me my rightful grief.
I will *not* forget. Every day, I reinforce it. I remember the loss, strengthen the pain in me, recreate it anew so that I may *never* forget its awfulness.
[Seek no transformation][Stances]. Do not shape your emotions into other, nicer emotions. Your mind will try to move on through the pain. Do not let it.
I have nothing but hate for the world. It will *not* be taken from me.

View File

@ -1,14 +0,0 @@
---
title: The Real Scope Insensitivity
date: 2012-02-07
techne: :wip
episteme: :believed
---
The real scope insensitivity.
Start listing all the people who suffer.
Does any benefit ever make up for it? I don't think so.
May Kali eat us all.

View File

@ -1,104 +0,0 @@
---
title: Why I'm Not a Utilitarian
alt_titles: [Utilitarian, Utilitarianism]
date: 2012-02-17
techne: :wip
episteme: :believed
toc: true
---
This is similar to [Why I'm Not a Vegetarian][]. It's less a single extensive argument than a collection of arguments to clarify my belief. However, most of these arguments are somewhat unusual and some, I think, even unique, so this should be interesting nonetheless.
# Notation
I'll use "utilitarianism" in the sense of "there is a single, computable utility function that maps worlds to a single number, the moral value of that world". This makes it simply a quantified version of consequentialism and so for the most part this could just as well be called "Why I'm Not a Consequentialist".
I don't understand utilitarianism to be limited to only one specific utility function, say "only pleasure counts". This is a general critique. As long as you are only looking at outcomes and reduce everything to a single number in the end, it's utilitarianism. (I follow LessWrong's use of terms here.)
What utilitarianism explicitly does not look at are (among other things) intentions and acts, but only the outcomes. This is what puts the "consequences" in "consequentialism", after all.
One way for utilitarianisms to differ is in their aggregation function. Say you have three beings of utility 5, 10 and 15. What's the total utility of that set? Total Utilitarianism (TotalUtil) says `sum(5,10,15) = 5+10+15 = 30`. Average Utilitarianism (AvgUtil) says `avg(5,10,15) = (5+10+15)/3 = 10`. Maximum Utilitarianism (MaxUtil, my name) says `max(5,10,15) = 15`. There are other ways to aggregate utility, but these three are by far the most common.
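In code, the three aggregation rules are one-liners (a trivial sketch, same numbers as above):

```ruby
utilities = [5, 10, 15]

total   = utilities.sum                       # => 30   (TotalUtil)
average = utilities.sum.to_f / utilities.size # => 10.0 (AvgUtil)
maximum = utilities.max                       # => 15   (MaxUtil)
```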
Another difference is between act, rule and preference utilitarianism. ActUtil is just standard utilitarianism - look at the outcomes of your actions, order them according to your utility function. RuleUtil incorporates game theory by acknowledging that we can't pragmatically do the full calculation from first principles for every choice we face, so we instead develop utility-maximizing rules which we follow. So fundamentally, ActUtil and RuleUtil are the same thing and only differ in how we end up doing the calculations in practice. PrefUtil, finally, derives most of its utility function from the preferences of beings, saying we should maximize the fulfillment of preferences.
Finally, not all arguments apply to all forms of utilitarianism equally. However, all of them taken together cover the whole range of positions, thus leading to a categorical rejection.
# (Most) Utilitarianism is Non-Local
Says Wiki-sama:
> In physics, the principle of locality states that an object is influenced directly only by its immediate surroundings.
Another way to express the idea of locality is to think in terms of a cellular automaton or Turing machine. Locality simply means that the machine only has to check the values of a limited set of cells (9 for the Game of Life, 1 for a standard TM) to figure out the next value of the current cell for any given step.
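To make "local" concrete, here is the Game of Life version of that bounded lookup (a sketch of my own, not part of the original argument):

```ruby
# One Game of Life update: the next value of a cell depends only on its
# 3x3 neighbourhood - 9 cells, itself included - which is what makes
# the rule local.
def next_cell(grid, row, col)
  offsets = [-1, 0, 1].product([-1, 0, 1]) - [[0, 0]]
  live = offsets.count do |dr, dc|
    r, c = row + dr, col + dc
    r >= 0 && c >= 0 && grid.dig(r, c) == 1
  end
  (live == 3 || (grid[row][col] == 1 && live == 2)) ? 1 : 0
end
```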
Moral theories must make prescriptions. If a moral theory doesn't tell you what to do, it's useless (tautologically so, really). So if after learning Theory X you still don't know what you should do to act according to Theory X, then it's to be discarded. Theory X must be wrong.
Accepting this requirement, we can draw some conclusions.
For one, AvgUtil is essentially wrong. AvgUtil is non-local - you can't determine the moral effect of any action unless you know the current moral status of the whole universe. Let's say you can bring a 20 utility being into existence. Should you do so? Well, what's the average utility of the universe right now? If it's below 20, do it, otherwise don't. So you need to know the whole universe, which you can't. Sucks to be you.
You have basically only two options:
1. Only ever do things that are morally neutral, so as to not affect the global average. (Which is an unexpected argument in [Antinatalism][]'s favor, but not a very strong one.)
2. Act so as to maximize the utility of as few beings as possible, hoping to do as little damage as possible. This way AvgUtil collapses into MaxUtil.
By the principle of locality, AvgUtil is either equivalent to positive MaxUtil (maximize benefit) or negative MaxUtil (minimize harm).
Here's another conclusion: PrefUtil (or its 2.0 version, [Desirism][]) is at least incomplete. It would require that you know the preferences of all beings so as to find a consensus. Again, this can't be done; it's a non-local action. It is possible to analyze some preferences as to how likely they are to conflict with other preferences, but not for all of them. If I want to be the only being in existence, then I know my preference is problematic. If I want no-one to eat pickle-flavored ice-cream, I need to know if anyone actually wants to do so. If not, my preference is just fine. But knowing this is again a non-local action, so I can't act morally.
So unless you are St. Dovetailer who can know all logical statements at once, your moral theories better be local, or you're screwed.
# Inter-Subjective Comparisons Don't Work
http://lesswrong.com/lw/9oa/against_utilitarianism_sobels_attack_on_judging/
# Expected Utility is Implausible
As [Rabin][] shows:
> Within expected-utility theory, for any concave utility function, even very little risk aversion over modest stakes implies an absurd degree of risk aversion over large stakes.
# Utilitarianism has Moral Luck
(And don't try to embrace [Moral Luck][]. That way lies madness.)
# Utilitarianism Ignores Irreparable Harm
This is an immediate consequence of treating benefit and harm as being on the same scale.
# Utilitarians Treat Everything as Means
# Utilitarians are Hypocrites {#calculations}
## Utilitarians Don't Calculate
While not an argument against the philosophical position itself, in my experience, almost no-one who makes claims about utility actually ever calculates it. That's a major problem undermining the whole theory. As long as a distribution of values exists that *could* favor whatever view a particular utilitarian is arguing for, they're happy.
It's really rare to see one actually do the math, and even rarer for one to do the math for *multiple* problems and use the *same* numbers every time. If they don't do the math, how can they claim that it is in their favor? Where does this knowledge come from? If they believe in their theory, why aren't they using it?
If you *have* done a utility calculation, I'd love to hear about it. (Seriously, [Contact][] me. I can't even decide on the rough order of magnitude for many relevant values.)
## Utilitarians Are Revealed Egoists
(This argument obviously doesn't apply to actual moral egoists. However, many utilitarians claim that fundamentally, all lives are morally equal. They are the targets of this critique.)
It's very simple. (This may involve moving to the US or similar countries first.)
Life insurance drastically increases the amount of money you have available after your death. You can name a charity as the beneficiary of such a policy. Do I have to spell out the rest?
Even assuming you think you can add substantial marginal value to your charity of choice besides donating money (and for most people, this assumption is clearly false), why don't utilitarians all have such a setup? And those that understand their own powers more realistically, why don't they commit suicide? The insurance will still cover them, typically after a short waiting period of 2 years.
What's this? It doesn't feel right? You have procrastination problems? You suddenly think your own life is maybe worth more than some starving child in a war-torn country? There are complex game-theoretical implications why this doesn't work, all of which you obviously have gone through *before* reaching the conclusion of not signing up?
Of course.
# But then what?
If Utilitarianism doesn't work, then what moral theory *do* I believe in? Honestly, as of right now, I don't know. However, deontology seems interesting. For one, it's local, doesn't treat anything as means, has no moral luck, is elegant, consistent, doesn't need intersubjective comparisons, solves the Original Position, Mere Addition Problem and Repugnant Conclusion, and captures the "not just a preference" character of morality. So I'd say it's a good candidate.

View File

@ -1,166 +0,0 @@
---
title: Information Wants to Pwn You
date: 2011-09-05
techne: :wip
episteme: :broken
---
Hacker Culture
==============
> Information wants to be free.
>
> -- a hacker motto
At first, I believed this statement solely on political grounds. When I grew up, everyone who wanted to control information was evil - the record industry, old politicians, you know, those kind of people. Sharing information was an act of rebellion, no matter what the information actually *was*. People didn't want you to have free access, so you simply created it, regardless of content, be it the Anarchist's Cookbook, warez or pr0n.
I grew up during the early Windoze years. One day, I accidentally opened an .exe file in a text editor and saw a lot of gibberish. I was amazed how someone could even *produce* this noise, let alone make it *work*. Later, I learned to program (and what machine code and compilers are) and adopted the culture of programmers, specifically open source ones.
It was obvious to me that information should be shared. Open your source code and others can learn from it, find bugs for you and even implement new features. Everybody wins. The only people wanting to hide their code were those more interested in making money. (Which was considered suspect in the communitarian culture I grew up in.) Worse, they were essentially only making money from *ignorance*. If everyone knew their code, or how to produce it themselves, then they wouldn't actually provide any worthwhile service at all.
This all convinced me that the motto was right, information really ought to be free. Up until now[^wikileaks] that is.
Bad News
========
The idea of psychological hijacking, in the form of indoctrination, for example, was always vaguely known to me, but I always thought that this is both a) hard to do and b) affects only *other* people, certainly not me. Weak-minded idiots become cult members and suicide bombers[^suicide]; I'm far too intelligent for that.
[^suicide]:
I see now how wrong I was about fanatics after having read the latest research into suicide bombers. In fact, I can see that I am *exactly* the kind of person who, under the right environmental factors, becomes just that. As a defense mechanism, I get very nervous whenever a belief I hold creates any strong emotions or radical disagreement with the culture it originated in.
I became more aware of the problem when I fell into the trap of a particularly nasty conspiracy theory[^conspiracy]. When I crawled my way out of it, I merely concluded that I had to become *smarter* and more *rational*. I thought of the problem in terms of psychology (being attracted to certain crowds and adopting their beliefs) and faulty reasoning (learn about fallacies and biases and you are safe). This changed when I learned about memetics, which provided a (basic) mechanism for how this actually happens.
A meme is a "unit of cultural transmission", the idea-equivalent of a gene, like an earworm. As memes are themselves replicators, they follow all the laws of evolution. I first applied this idea by thinking through the implications of treating [music][Letting Go of Music] as a replicator. I wasn't quite sure what to make of my conclusions, and I didn't seriously deal with them (beyond downsizing my music library from 200GB to about 30GB) until now. (I also should revisit the article and fix several blatant flaws.)
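As a toy illustration of what "follow all the laws of evolution" means here - this is my own sketch, not anything from the memetics literature - consider a pool of memes under selection (catchier memes get retold more often) and mutation (retellings are noisy copies):

```python
import random

random.seed(0)
pool = [0.5] * 100  # 100 copies of a mediocre meme ("catchiness" 0.5)

for generation in range(50):
    # selection: the chance of being retold is proportional to catchiness
    pool = random.choices(pool, weights=pool, k=len(pool))
    # mutation: every retelling is a slightly imperfect copy
    pool = [min(1.0, max(0.0, m + random.gauss(0, 0.02))) for m in pool]

print(sum(pool) / len(pool))  # mean catchiness has drifted well above 0.5
```

Nothing in the loop cares whether the meme is true or good for its host; catchiness is the only thing being selected for.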
It really clicked upon encountering the concept of the [Langford Basilisk]. Let this neat picture explain it:
<%= image("parrot.jpg", "The Parrot") %>
A Langford Basilisk is a genuinely dangerous idea. In its original form, it works by making the brain think an impossible thought - essentially setting off a logic bomb. I don't believe that the human brain is actually susceptible to this kind of attack, but a poorly designed AI might be. Regardless, there are other forms of Basilisks, some of which I actually know to work (under certain conditions).
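As a toy rendering of the "logic bomb" idea - my own sketch, nothing to do with Langford's actual story - here is what a naive evaluator does when fed the liar paradox:

```python
# A naive truth-evaluator with no guard against self-reference.
def evaluate(statement):
    if statement == "this statement is false":
        # to know its truth value, evaluate it... which negates itself
        return not evaluate(statement)
    return True

evaluate("this statement is false")  # blows the stack: RecursionError
```

A brain just shrugs and moves on; the sketch only hurts an evaluator literal-minded enough to keep unwinding the sentence.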
Consequences
============
Ok, maybe ideas *are* dangerous, not just in the "this exposes my own flaws or
crimes and helps my opponents" kinda sense, but in the "computer virus" sense.
Still, what should we do about that? To be honest, I'm not quite sure. But I
can at least provide some examples and explain how I plan to handle them in
the future.
The most common example I've seen of a memetic hazard being treated as one is
the TV Tropes wiki (intentionally not linked). It's a black hole for any
culture whore (like myself), sucking up your free time without any end in
sight. I easily lost *weeks* of my life in there. Many tropers follow up any
link to it with a warning. I am slightly immune to it now, but only because I
know most of it by heart. That's like becoming an atheist by going to
seminary[^seminary]. Not really practical. I had tried to cap my exposure with
time limits, but that didn't really help. So I needed a systematic approach.
So let's draft a little catalogue of memetic hazards.
<%= image("memetically_active.jpg", "Memetically Active") %>
Structural Hijacking
--------------------
Things that are dangerous because of their structure. The most common example is
anything that resembles a Skinner box. Most notorious are Twitter, MMOs and email.
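What makes these structures sticky is, I'd guess, the variable-ratio reward
schedule: each check *might* pay off, so no number of empty checks tells you to
stop. A minimal sketch (the 10% payoff rate is an arbitrary assumption of mine):

```python
import random

random.seed(0)

def check_inbox(p=0.1):
    """One pull of the lever: new mail arrives with probability p."""
    return random.random() < p

checks = rewards = 0
while rewards < 10:  # "I'll stop once something good shows up... again"
    checks += 1
    rewards += check_inbox()

print(f"{checks} checks for {rewards} rewards")
```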
Emotional Hijacking
-------------------
Things that hide themselves by taking over your emotional system. Many drugs,
particularly heroin, come to mind as non-meme examples. But what would their
equivalent look like as an idea? Something that controls your emotions directly
to serve its own purpose (or that of its creator)?
What about music? When I revisited some old music I hadn't listened to in a few
years, it became obvious to me. It puts me in a specific emotional state and
tries to keep me there for as long as it can, not unlike an addiction. The
emotional control itself wasn't the immediate problem (if I have a song that
makes me wide awake, motivated and happy, why not listen to it?), but rather
that it would force emotions on me I *didn't* actually want. Some songs would
make me angry or sad, and there was little I could actually do against it!
Very, very evil.
Our brains have no natural distinction between "I believe this" and "I observe
this". *Everything* that happens is at first taken at face value, taken to be
true. If there is sadness, then *I* must be sad and must have a reason to be
sad. That I am merely reacting to a superstimulus goes undetected. The same
effect, of course, is dramatic when it comes to our beliefs. Plenty of
experiments have demonstrated that merely *stating* an opinion, even explicitly
just to repeat something someone else said, will cause our own opinion to shift
in that direction unless proper measures are taken. If I merely get you to
think about a proposition and you don't think it through yourself, you are very
likely to become a little bit more convinced of it and identify with it.
The important conclusion is that there is no such thing as neutral observation.
You can't perform emotionally powerful acts without them controlling your mind.
The Buddhists have warned us about this for centuries: if you lie, you will
harm *yourself* in the process. You will start to believe your own lies,
whether you want to or not.
The way to handle this is a) to be as honest as you possibly can (so you never
state or do something you wouldn't want to become a part of you) and b) to put
off [proposing any solution] to a problem until you have understood it. The
moment you start defending or attacking a solution, you likely become stuck,
and changing your mind later is quite difficult.
But you can also use this to your advantage! The Tibetans in particular have
long taught that loving-kindness and a general good mood are not magical things
that just happen, but skills to be learned. At first you just pretend to feel
like the kind of person you'd like to be, and through some regular practice you
actually start feeling like that automatically. Very cool and powerful. Just
sitting down and forcing myself to be calm and smile for 15 minutes has helped
me greatly through phases of depression.
I also apply this to the recreational media I watch. I now only watch TV shows
or movies that have characters in them I want to identify with - protagonists
that are actual role models. I don't do this for moralistic reasons (you should
be a nice person!), but purely pragmatic ones (I enjoy being nice, so I won't
watch shows with asshole protagonists, as I will become more like them whether
I want to or not, regardless of how much I enjoy the show).
Remember that there is no such thing as a "real" and a "fake" emotion. Emotions
are (biochemical) brain states, like a tag, and can be changed at will. They are
not "layered" or even aware of any content at all. You don't like your current
state? Hack it! It's like changing your wallpaper - there's no "true" wallpaper
underneath and you can't just "try on" another one. There is only one, right
now, and whatever you choose, that's it. So make it a pretty one.
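Literalizing the wallpaper metaphor as a sketch (mine, not the Buddhists'):

```python
# There is no hidden "true" mood underneath the current one.
class Mind:
    def __init__(self):
        self.mood = "anxious"  # the tag that happens to be set right now

mind = Mind()
mind.mood = "calm"  # overwriting the tag *is* the change; nothing is layered
print(mind.mood)    # calm
```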
[proposing any solution]: http://lesswrong.com/lw/ka/hold_off_on_proposing_solutions/
Intellectual Hijacking
----------------------
Knowing just enough to be dangerous.
As a general rule, treat information exchange like sex. It might be fun, but
that's a side effect that was only built into you so you would actually do it a
lot. The real purpose is reproduction, so make sure to be safe. Choose your
partners carefully and don't engage in just any practice.
[^wikileaks]: At the time of writing (December 2010), Wikileaks is all over the
news. It's great to finally see someone pull a Hagbard Celine, but even greater
to be made aware, through the fallout, of how afraid of chaos I had become. I
was seriously worried that this could cause some of the major political players
to become even more paranoid, putting many (semi-)stable arrangements at risk
of collapse. I was particularly worried about how it would fuel the increasing
[neo-fascism] of the US. Luckily, my Discordian training eventually kicked in
and I remembered that what I was seeing was not a threat to order, but rather
an exposition of the inherent chaos.
[^conspiracy]: I'm unwilling to publicly state the conspiracy theory I
believed, but if you send me an [email](/about.html) and ask me in private,
I'll discuss it.
[^seminary]: Amusingly, this seminary effect actually happens. I used to study
religions (in a historical context) and met someone who studied theology. He
told me that about half the students each year would start out as Christians
and end up as atheists once they learned how the Bible actually came to be and
stuff like that. Information kills religions dead.
[neo-fascism]: http://zompist.com/fascism.html
[Langford Basilisk]: http://www.ansible.co.uk/writing/c-b-faq.html

View File

@@ -1,12 +0,0 @@
---
title: Gospel of Yama
date: 2011-12-14
techne: :wip
episteme: :fiction
---
# The Gospel of Yama
Christ's mission was the descent into hell.