---
title: Crystallization
date: 2012-01-11
tags:
  - ai
  - personal crap
techne: :done
episteme: :speculation
slug: 2012/01/11/crystallization/
---

Just some personal stuff. I tried writing this privately for the last few days, but avoided the work and didn't get anywhere. For some reason, public posts just work better. I apologize for the inconvenience. I plan to eventually split my content into "stable" (main site), "in-progress, somewhat experimental" (good parts of this blog, unpublished drafts) and "incoherent ranting I need to do in public or my brain gets stuck" (some unsorted note file or something). Expect it within a month or so.

Stuff's beginning to make sense. I got my wake-up call and some motivation to clean things up. Some former attachments that have been sucking up my time have disappeared.

I'm currently facing three problems:

  1. Is powerful AGI possible within my lifetime?[1] If so, how can I best help achieve it?

  2. What's a good[2] instrumental career for me to pursue?

  3. How can I prevent myself from being deeply unsatisfied with my choices? How can I make life suck at most a tolerable amount?

And because I'm running out of time, I'll have to solve these problems now. Like with strict deadlines and milestones and everything.

AGI

For the last few months, one sobering thought was that AGI will take a lot more time than I thought[3]. Back in 2005, I kinda expected a Singularity by 2030 at most, so I didn't take much care to plan for my future. Why bother with careers when technological progress is your retirement plan?

According to Luke, even SIAI thinks AGI is at least 3 more decades away. (Shane Legg is pretty much the only serious scientist I can think of who believes in early AGI.) That's a lot of time, and it makes SIAI's strategy of outreach quite plausible: it's too early to actually focus on the research itself, and better to enable research later on. Besides, I'm not a world-class mathematician, so I wouldn't be able to contribute directly anyway. (And I agree with the assessment that we need mathematicians and analytical philosophers, not engineers.)

So some implications: what AGI research needs right now is money and volunteers who actually do something. (Louie Helm recently noted that he couldn't get even one of 200 volunteers to spread some links around for SEO. That's... just wow. I know very little about charity work; maybe that's not unusual. But it's still appalling. And I'm no better - I thought backing up a Minecraft claim was an actual good use of 10 hours of my time.)

This means that me helping with any research - and I don't have the delusion of being able to actually do AI research myself[4] - isn't gonna happen, and the best I can do is help others set up a research environment. So: money and improving social environments. This leaves many of my mental resources open for personal projects. That's good. (But I'll have to work for money, and I don't like that right now, though I think after a year or two I'll get used to it. If not, I can still try teaching meditation to ~~delusional fools~~ people interested in unusual and/or hardcore practice. Kenneth Folk seems to manage, so maybe there's enough of a market.)

In Which muflax Digresses

But before we get to the career thingy, let's pin the AI thing down a bit more. Why am I interested in the first place? I don't really care for math research, and personally I'm much more interested in history and efficient human learning, so AI is not a primary interest of mine. I also don't care about existential risk. Like, at all. (I have a hard enough time caring about muflax(t + 1 year).) But there's one potentially really cool insight in AI: algorithmic probability. It's our best guess yet that there even is such a thing as general intelligence, in the sense of an ideal algorithm (or family of algorithms) for optimal problem-solving and learning. The idea of algorithmic probability as Occam's Razor seems very interesting and fruitful, so I'm focusing a lot of my time on understanding it.
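
Roughly - and this is only the one-line sketch, not the careful treatment I'm planning - algorithmic probability weights every program that could have produced your data by 2^(-length):

$$ m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}, \qquad K(x) = \min \{ \ell(p) : U(p) = x \} $$

where $U$ is a universal prefix Turing machine and $\ell(p)$ is the length of program $p$ in bits. Shorter programs, i.e. simpler hypotheses, dominate the sum exponentially - that's Occam's Razor turned into an actual prior - and $K(x)$, the Kolmogorov complexity, is just the length of the shortest program that outputs $x$.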

In order to do so, I'll write a kind of introduction to Solomonoff Induction, Kolmogorov Complexity, AIXI, and some questions I'm currently facing. I'll probably turn this into a LW post once I properly understand it myself, have polished it, and have gotten some feedback. I'm also writing a German presentation for a class with n=1. (Yes, literally everyone except me dropped out, but hey, I love AIXI, so I'm not letting that stop me. If Schopenhauer can lecture to an empty room, then so can I.)
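
To give a taste (and to keep myself honest), here's the kind of toy I have in mind for that introduction: a Bayesian mixture over a deliberately tiny hypothesis class of repeating bit patterns, each weighted by 2^(-description length). To be clear, this is not Solomonoff induction - the real thing mixes over all programs of a universal machine and is incomputable - just a finite caricature of the same Occam-flavored prior, with all names and choices made up for illustration.

```python
# Toy caricature of Solomonoff-style induction: a Bayesian mixture over a
# tiny, finite hypothesis class, with each hypothesis weighted 2^(-length).
# The real thing mixes over all programs of a universal machine (incomputable).

from itertools import product

def hypotheses(max_period=4):
    """All repeating binary patterns with period <= max_period.
    Description length is crudely taken to be the pattern length in bits."""
    for k in range(1, max_period + 1):
        for bits in product("01", repeat=k):
            yield "".join(bits), k  # (pattern, description length)

def consistent(pattern, data):
    """True if endlessly repeating `pattern` reproduces the observed `data`."""
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

def p_next_one(data, max_period=4):
    """P(next bit = 1 | data): sum 2^-length over consistent hypotheses,
    split by which bit each predicts next, then renormalize."""
    weight = {"0": 0.0, "1": 0.0}
    for pattern, length in hypotheses(max_period):
        if consistent(pattern, data):
            weight[pattern[len(data) % len(pattern)]] += 2.0 ** -length
    total = weight["0"] + weight["1"]
    return weight["1"] / total if total else 0.5

print(p_next_one("010101"))  # 0.0: every consistent short pattern predicts "0" next
print(p_next_one("0110"))    # ~0.67: "011" (short) says "1", "0110" (longer) says "0"
```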

My normal essay-writing method, especially for class, goes something like this: Start 4 months ahead of time. First month, do nothing. If someone asks you how you're getting along, say "fine". Next month, get a big cup of coffee and skim through the entire literature in one sitting, write down an outline of the paper, collapse. Don't do anything but play videogames for a few days. Next month, get even bigger cup of coffee and write "rough draft", i.e. fill in everything, cursing at how lazy you've been and how little you understand. Takes about 2-3 days. Collapse, sleep for 16 hours, do nothing for a week. Form the firm intention of editing and carefully checking your essay. Ignore intention until 1 day before deadline. Curse, try to fix as many mistakes as you can, hate yourself. Done.

Due to scheduling problems and so on, I can't use this approach this time. So I'm trying something new: I'm writing it live. Normally when I write class material, I don't think about the material. (This is a bug.) Thus my understanding is way too superficial and bullshit-y. However, I noticed that back in high school, when I was practicing physics with a friend, I actually understood the stuff because I was forced to explain it to someone who was constantly poking holes in my theories. This friend had the patience to let me rationalize all day long, but he didn't let me get away with bullshit. (He benefited from it too, because I eventually did arrive at the right explanation, something he had trouble with.) So this time, I'm letting actual questions guide my writing process. More next post.

Career

> It's time to make people take you more seriously. If they don't respond to your demands within a half-hour of reading this, start killing the hostages. -- my horoscope for this week

Last year I got my first job ever, doing some embedded systems programming. I learned two things: I really like programming, and I really don't like hardware or anything related to it. So I'm now shifting my specialization towards high-level programming and the web. This has another advantage: several projects I really like (including LessWrong and PredictionBook) have way too few programmers and many open problems. Jackpot! I can improve my skills and use them to build some reputation. The good thing is that I already know much of the underlying architecture; I just don't have much experience doing web work, and no clue about interfaces. But I've been going around claiming that "learning is a solved problem", so I'd better shut up and demonstrate it.

Unfortunately, this specialization will mean I'll have to drop most of my hobbies. This is not so bad - thanks to my hyper-experimentation with different learning methods, I can actually convert almost all of them into low-maintenance routines.

I'm not sure where I should be looking for a programming job after I get my degree, so I'll prioritize figuring this out. Not even sure about the country.

Sticking with Stuff

Honestly? This section has been sitting here for a day, empty. I have some ideas about how to go about this, but right now I don't think talking about it would help, and I'm not even sure I can articulate it just yet. I feel I first have to make a mess, and only then can I go about cleaning it up.

So I'm off to write about Solomonoff induction, learn more anatomy and maybe do some philosophy reading on the side. (And when I can't think, play some BG2.) Not much else this month.

Footnotes

[1] Why limit AGI to my lifetime? I don't have the caring capacity to fight for other people. If I can't benefit from it, then realistically, I'm not going to do it. I don't know if this is an expression of my real values, or just a limitation of my current hardware. In practice the distinction won't make much of a difference, so I have to take it into account either way. (I do take care not to pursue options that would prevent me from changing my mind on the matter, like wireheading myself via meditation practice.)

[2] Why not the best career? 'cause I tend to get stuck in perfectionist planning. I'll spend years figuring out how to raise my decision optimality from 80% to 90% instead of just going with the 80% option and doing something with it. I would already speak Japanese fluently if I hadn't spent nearly 2 years just experimenting with new techniques and had instead just used my best guess at the time. So I've decided to actively limit my exploration phase.

[3] When I say that I expected AGI soon, what I really mean is that I expected one of two things - a Singularity soon, or never. I was favoring "never" for mostly anthropic reasons. The Great Filter looked very convincing, and AGI without expansion seems quite implausible, so I shouldn't expect to ever see AGI myself. Recently, I've become a bit more skeptical about the Great Filter, but more importantly, I started taking AGI much more seriously once I saw the beauty of algorithmic probability. I do plan on revisiting the Great Filter soon(tm), but I'm currently a bit swamped with projects. Once I have my antinatalism FAQ done, maybe.

[4] I'm probably smart enough in general terms to invent AI, given indefinite time and resources. But we have neither, so I'll defer to the people with better intuitions and established knowledge bases. There's no point in me spending 5-10 years learning research-level math when I could use that time to do something fun, earn some money, and pay someone with probably decades more experience to do the math instead.