title: An Acausal App
date: 2012-04-15
tags:
techne: :wip
episteme: :speculation

I've been practicing acausal magic for a while now. In fact, I've been juggling so many spells lately that I'm having trouble remembering them all. So I wrote an app.

(Intellectual hipsters, beware! Algorithmancy might be going mainstream soon, so you'd better get in on this now before everyone's doing it. The Chinese are already moving in on the market. And you know there'll be Asian time-travelers everywhen once their parents decide it's important.)

Let's get started with the motivation.

So I'm in my favorite supermarket and I wonder when I'm allowed to eat pizza again. I can't properly digest grains and get stomach cramps and other nasty stuff, but I love pizza, so I have a kind of deal with myself where I only eat pizza once a month or so. Of course I forgot when I'm allowed to eat again. Strike 1.

Now I'm considering ice-cream. I want to lose weight, but then it's ice-cream. So I consider a trade-off: I know I won't feel bad about it once I'm home, but saying no right now sucks. Is a week of slight craving for ice-cream worth the self-approval of sticking to a decent diet? I'm not sure. Strike 2.

I decide against the ice-cream and want to buy some chicken. I know I have a bad habit of forgetting about food, so buying anything that might go bad is a big risk. This chicken only lasts a few days. Bad. If only I could make a contract with myself 3 days from now: I'll buy the chicken if and only if I'll actually eat it by then. Strike 3 and you're out.

Time to solve these kinds of problems.

These problems are fundamentally all game-theoretical trades; the catch is that the agents involved are temporally separated. Beeminder is already an awesome way to cooperate in such situations. The main problem is that you can't spontaneously make a contract there, nor does it support some of the unique trade-offs I described.

Enter Acausal Trade, a new way to arrange a trade in the multiverse. (Ensuring the consent of all participants is left as an exercise to the user.)

How does it work?

Remember the pizza - I will enjoy it now, but feel slightly sick for 2 or 3 days afterwards. So I make a contract for the next 3 days, involving muflax(0) (today) up to muflax(3). Each of us states the expected level of enjoyment they'd get out of the contract. In this case, it would look like this:

[screenshot: the contract, one enjoyment score each for muflax(0) through muflax(3)]

Scores go from -5 (horrible) to +5 (awesome). Every participant has to personally agree to the deal. (There is no "yes to all". This is completely intentional.) When agreeing, channel the participant (see your guide to the multiverse on how to do that) and let them agree or disagree. Adjust the deal if necessary until everyone consents. Done.
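
For concreteness, here's roughly what such a contract looks like as data, sketched in Python. None of these names come from the actual app; it's just the shape of the thing.

```python
# Hypothetical sketch of the contract model, not the app's actual code.
from dataclasses import dataclass, field

@dataclass
class Participant:
    offset: int             # days from today: muflax(0), muflax(1), ...
    score: int              # expected enjoyment, -5 (horrible) to +5 (awesome)
    consented: bool = False

@dataclass
class Contract:
    description: str
    participants: list[Participant] = field(default_factory=list)

    def agree(self, offset: int) -> None:
        """Each participant consents individually; there is no 'yes to all'."""
        for p in self.participants:
            if p.offset == offset:
                if not -5 <= p.score <= 5:
                    raise ValueError("scores run from -5 to +5")
                p.consented = True
                return
        raise KeyError(f"no participant muflax({offset})")

    @property
    def in_force(self) -> bool:
        return all(p.consented for p in self.participants)

# The pizza deal: muflax(0) enjoys it, muflax(1..3) pay for it.
pizza = Contract("eat pizza today", [
    Participant(0, +4), Participant(1, -1),
    Participant(2, -1), Participant(3, -1),
])
for day in range(4):   # channel each self in turn and let them agree
    pizza.agree(day)
assert pizza.in_force
```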

The app will send a message to every participant. In this case, you'll get a notification once a day for the next 3 days. The message has several purposes. Most obviously, it's a reminder. More importantly (you could just use a straightforward todo app for reminders), it gives every participant an opportunity to revise the contract.

muflax(1) may have been channeled wrong. muflax(0) might think a pizza aftermath feels like a -1, but muflax(1) actually thinks it's a -3 and re-adjusts the score accordingly. muflax(0) totally forgot how bad the cramps can get. (muflax(1) can change the score at any time during their day.)
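
The revision step looks something like this (again hypothetical: scores keyed by day offset, and you can only touch your own day):

```python
# Hypothetical revision step: each self may overwrite the score for
# their own day, at any time during that day.
def revise_score(scores: dict[int, int], today: int, new_score: int) -> None:
    if today not in scores:
        raise KeyError(f"muflax({today}) is not part of this contract")
    if not -5 <= new_score <= 5:
        raise ValueError("scores run from -5 to +5")
    scores[today] = new_score

scores = {0: +4, 1: -1, 2: -1, 3: -1}        # muflax(0)'s original estimates
revise_score(scores, today=1, new_score=-3)  # muflax(1) knows better
```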

There are several advantages to that. First, your ability to predict scores should get better. My (informal) predictions have improved since I started using PredictionBook, so I expect it to generalize. This makes later trades more honest. I suspect that underestimating the negative effects of productivity contracts is a major reason they fail so frequently. Future-you isn't sabotaging you out of spite, you know.
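
If you log both the original estimates and the later revisions, checking your calibration is trivial. The app doesn't necessarily compute this; it's just what I mean by "getting better at predicting scores":

```python
# Hypothetical calibration check: how far off were muflax(0)'s estimates
# from what the later selves settled on?
def mean_prediction_error(predicted: dict[int, int],
                          revised: dict[int, int]) -> float:
    days = predicted.keys() & revised.keys()
    return sum(abs(predicted[d] - revised[d]) for d in days) / len(days)

predicted = {1: -1, 2: -1, 3: -1}   # what muflax(0) expected
revised   = {1: -3, 2: -2, 3: -1}   # what actually happened
print(mean_prediction_error(predicted, revised))  # 1.0
```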

Second, it gives other participants a better way to state their consent. I strongly suspect that respecting consent is a crucial feature of morality, and I don't have a perfect track record when it comes to trades with future-me. Being able to revoke consent at a later time makes this explicit and should help increase past-me's luminosity. It's harder to be evil when you are aware of the damage you're doing. (Revoking consent ends the contract. You might try to arrange a new one if you want.)

Finally, well, how do you arrange contracts with the future? If I'm involved in a trade, I want to know about it. I can't walk up to someone's house, pretend I'm buying their bike for 5 bucks, put the money in the mail box and take the bike with me. I actually have to talk to them, you know. So how can you say you're trading with future instances of yourself when you don't contact these future instances? It's a bullshit rationalization, nothing more. So every participant gets a message and has to consent twice - once when entering the contract, once as soon as they find themselves in possession of the phone.
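
So consent is a two-stage thing. A hypothetical sketch of the states involved:

```python
# Hypothetical two-stage consent: once channeled when the contract is
# arranged, once in person when that self actually holds the phone.
from enum import Enum, auto

class Consent(Enum):
    PENDING   = auto()   # not yet asked
    CHANNELED = auto()   # agreed at arrangement time, via channeling
    CONFIRMED = auto()   # agreed again with the phone in hand
    REVOKED   = auto()   # refused on their day; the contract ends

def confirm_in_person(state: Consent, agrees: bool) -> Consent:
    if state is not Consent.CHANNELED:
        raise ValueError("must be channeled into the contract first")
    return Consent.CONFIRMED if agrees else Consent.REVOKED
```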

Another feature is that it keeps track of time imbalances. If you constantly arrange contracts that are bad for future-you, then you might lose their support. Try to arrange some deals that benefit them as well! (There is one major problem with that, though. Contracts with future participants are neat, but what about past participants? That'd be really cool! But how do you get their consent? They don't get access to the phone anymore. I'm still thinking about a solution to that problem.)
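
The imbalance bookkeeping amounts to netting up each temporal offset's scores across all past contracts; a strongly negative balance for your future selves means you've been exploiting them. A hypothetical sketch:

```python
# Hypothetical imbalance ledger: sum each offset's scores over history.
from collections import defaultdict

def balances(contracts: list[dict[int, int]]) -> dict[int, int]:
    total: defaultdict[int, int] = defaultdict(int)
    for scores in contracts:
        for offset, score in scores.items():
            total[offset] += score
    return dict(total)

history = [
    {0: +4, 1: -1, 2: -1, 3: -1},   # the pizza deal
    {0: -2, 1: +1, 2: +1},          # skipping the ice-cream
]
print(balances(history))  # {0: 2, 1: 0, 2: 0, 3: -1}
```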

The supported contract types and the interface in general are still completely in flux, but it's already usable. (I've arranged some contracts already.)

Source and [binary] are freely available. It runs on Android 2.3.6 because that's what's on my phone; I have no idea if it works on any other Android version. This is a totally experimental prototype. I literally wrote it in the last 24 hours. It might eat your cat or decrease your measure. I'll play around with it for a while, and if I'm still using it in a couple of weeks and have settled on an interface, I'll make a proper release to the Android Market.

(Disclaimer: muflax neither endorses nor denies algorithmic philosophy. Side effects may include anxiety, Pareto-inefficient trades, basilisk nightmares and unwarranted commitments to alien ontologies. Ask your metaphysician if trading with the future is right for you.)

(And if you're saying that these are just ad-hoc commitment contracts and the talk about acausal trade is just belief attire, well, then you're probably right, but hey, algorithmancy sounds so much better than self-help, right, guys? Guys?)