ported all blogs

master
muflax 2012-04-16 15:33:11 +02:00
parent bed1832314
commit 8892b85de0
85 changed files with 446 additions and 378 deletions

BIN  (new image, 26 KiB)
BIN  (new image, 21 KiB)
BIN  (new image, 82 KiB)
BIN  (new image, 42 KiB)
BIN  (new image, 182 KiB)
BIN  content/pigs/gol_1.png (new image, 3.8 KiB)
BIN  content/pigs/gol_2.png (new image, 13 KiB)
BIN  content/pigs/gol_3.png (new image, 26 KiB)
BIN  content/pigs/grief.jpg (new image, 7.2 KiB)
BIN  content/pigs/hill1.png (new image, 37 KiB)
BIN  content/pigs/hill21.png (new image, 20 KiB)
BIN  content/pigs/hill22.png (new image, 37 KiB)
BIN  content/pigs/hours1.png (new image, 4.8 KiB)
BIN  content/pigs/nlamnqf7jg.jpg (new image, 22 KiB)
BIN  (new image, 76 KiB)
BIN  (new image, 26 KiB)
BIN  content/pigs/pie_hole.jpg (new image, 21 KiB)
BIN  (modified image, 30 KiB before, 32 KiB after)
BIN  (new image, 18 KiB)
BIN  (new image, 98 KiB)
BIN  content/pigs/twoface.jpg (new image, 18 KiB)

View File

@@ -5,8 +5,11 @@
[Actual Freedom]: http://www.dharmaoverground.org/web/guest/dharma-wiki/-/wiki/Main/Actualism
[Adolf Hitler]: http://en.wikipedia.org/wiki/Adolf_Hitler
[Aeron]: https://en.wikipedia.org/wiki/Aeron_chair
[Ahimsa]: https://en.wikipedia.org/wiki/Ahimsa
[Ajatasattu]: https://en.wikipedia.org/wiki/Ajatasatru
[Ajita Kesakambali]: https://en.wikipedia.org/wiki/Ajita_Kesakambali
[Aki Sora]: http://en.wikipedia.org/wiki/Aki_Sora
[Algorithmic Probability]: http://en.wikipedia.org/wiki/Algorithmic_probability
[Anatta]: http://en.wikipedia.org/wiki/Anatta
[Anhedonia]: http://en.wikipedia.org/wiki/Anhedonia
[Anicca]: http://en.wikipedia.org/wiki/Anicca
@@ -14,60 +17,97 @@
[Antinatalism]: http://en.wikipedia.org/wiki/Antinatalism
[Apocrypha Discordia]: http://appendix.23ae.com/apocrypha/index.html
[Arising and Passing Away]: http://www.dharmaoverground.org/web/guest/dharma-wiki/-/wiki/Main/The%20Arising%20and%20Passing%20Away?p_r_p_185834411_title=The%20Arising%20and%20Passing%20Away
[Arrow Paradox]: http://en.wikipedia.org/wiki/Zeno%27s_paradoxes#The_arrow_paradox
[Astronomical Waste]: http://www.nickbostrom.com/astronomical/waste.html
[Ayahuasca]: http://en.wikipedia.org/wiki/Ayahuasca
[B-theory]: http://en.wikipedia.org/wiki/A-series_and_B-series
[BG2]: https://en.wikipedia.org/wiki/Baldur%27s_Gate_II:_Shadows_of_Amn
[Benatar]: http://en.wikipedia.org/wiki/David_Benatar
[Better Never to Have Been]: http://www.amazon.com/Better-Never-Have-Been-Existence/dp/0199296421
[Book of the Dead]: https://en.wikipedia.org/wiki/Bardo_Thodol
[Brainfuck]: http://en.wikipedia.org/wiki/Brainfuck
[Bushido]: http://en.wikipedia.org/wiki/Bushido
[Catuskoti]: http://en.wikipedia.org/wiki/Catu%E1%B9%A3ko%E1%B9%ADi
[Child sexual abuse]: http://en.wikipedia.org/wiki/Child_sexual_abuse
[Choronzon]: https://en.wikipedia.org/wiki/Choronzon
[Convict Conditioning]: http://www.dragondoor.com/b41/
[Core Dump]: http://en.wikipedia.org/wiki/Core_dump
[Crank]: http://en.wikipedia.org/wiki/Crank_%28person%29
[Crocker's Rules]: http://wiki.lesswrong.com/wiki/Crocker%27s_rules
[Crown of Thorns]: http://en.wikipedia.org/wiki/Crown_of_Thorns
[Crusader Kings II]: http://www.paradoxplaza.com/games/crusader-kings-ii
[DXM]: https://www.erowid.org/chemicals/dxm/faq/dxm_faq.shtml
[Deontology]: https://en.wikipedia.org/wiki/Deontology
[Desirism]: http://commonsenseatheism.com/?p=2982
[Dirk Gently TV]: https://en.wikipedia.org/wiki/Dirk_Gently_%28TV_series%29
[Discordianism]: http://en.wikipedia.org/wiki/Discordianism
[Divine Simplicity]: http://en.wikipedia.org/wiki/Divine_simplicity
[Dukkha]: http://en.wikipedia.org/wiki/Dukkha
[Dunbar's Number]: http://en.wikipedia.org/wiki/Dunbar's_Number
[Egoism]: http://en.wikipedia.org/wiki/Ethical_egoism
[Eliezer Yudkowsky]: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
[Entheogen]: http://en.wikipedia.org/wiki/Entheogen
[Epistemology]: http://en.wikipedia.org/wiki/Epistemology
[Eternal Return]: http://en.wikipedia.org/wiki/Eternal_return#Friedrich_Nietzsche
[Evil Is Cool]: http://tvtropes.org/pmwiki/pmwiki.php/Main/EvilIsCool
[Evil Trope]: http://tvtropes.org/pmwiki/pmwiki.php/Main/EvilTropes
[Existential Risks]: http://en.wikipedia.org/wiki/Risks_to_civilization,_humans_and_planet_Earth
[Expanding Circle]: http://en.wikipedia.org/wiki/Peter_Singer
[Experience Machine]: https://en.wikipedia.org/wiki/Experience_machine
[Faust]: https://en.wikipedia.org/wiki/Goethe%27s_Faust
[Fetter]: http://en.wikipedia.org/wiki/Fetter_%28Buddhism%29
[Five Hindrances]: https://en.wikipedia.org/wiki/Five_hindrances
[Flanging]: http://en.wikipedia.org/wiki/Flanging
[Fomenko claims]: http://en.wikipedia.org/wiki/New_Chronology_%28Fomenko%29#Fomenko.27s_claims
[Fomenko]: http://en.wikipedia.org/wiki/New_Chronology_%28Fomenko%29
[Frankl]: http://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning
[Hanlon's Razor]: http://en.wikipedia.org/wiki/Hanlon%27s_Razor
[Hedonic Treadmill]: http://en.wikipedia.org/wiki/Hedonic_treadmill
[Higher Criticism]: http://en.wikipedia.org/wiki/Historical_criticism
[Hot Fuzz]: http://en.wikipedia.org/wiki/Hot_Fuzz
[Hypothetical Consent]: http://simonamey.com/Philosophy/Entry.php?entryid=314
[Implied Consent]: http://en.wikipedia.org/wiki/Implied_consent
[Inerrancy]: http://en.wikipedia.org/wiki/Biblical_inerrancy
[Jetpack Hitler]: http://tvtropes.org/pmwiki/pmwiki.php/Main/StupidJetpackHitler
[Jhana]: http://en.wikipedia.org/wiki/Dhy%C4%81na_in_Buddhism#Usage_of_jh.C4.81na
[Judea Pearl]: http://en.wikipedia.org/wiki/Judea_Pearl
[Julian Jaynes]: http://en.wikipedia.org/wiki/Julian_Jaynes
[Kai Lexx]: http://en.wikipedia.org/wiki/Kai_(Lexx)
[Kali]: http://en.wikipedia.org/wiki/Kali
[Kant]: https://en.wikipedia.org/wiki/Immanuel_Kant
[Kasina]: http://en.wikipedia.org/wiki/Kasina
[Kenosis]: http://en.wikipedia.org/wiki/Kenosis
[Kerghan]: http://en.wikipedia.org/wiki/Arcanum:_Of_Steamworks_and_Magick_Obscura
[Ksitigarbha]: http://en.wikipedia.org/wiki/Ksitigarbha
[LZ77]: http://en.wikipedia.org/wiki/LZ77_and_LZ78_%28algorithms%29
[Lain]: https://en.wikipedia.org/wiki/Serial_Experiments_Lain
[Langton's Ant]: http://en.wikipedia.org/wiki/Langton's_ant
[Laws of Form]: http://en.wikipedia.org/wiki/Laws_of_Form
[Lelouch]: http://en.wikipedia.org/wiki/Lelouch_Lamperouge
[LenPEG]: http://www.dangermouse.net/esoteric/lenpeg.html
[Lossless Data Compression]: http://en.wikipedia.org/wiki/Lossless_data_compression
[Lucid dreaming]: http://en.wikipedia.org/wiki/Lucid_dreaming
[Lucius Vorenus]: http://en.wikipedia.org/wiki/Lucius_Vorenus_%28Rome_character%29
[MCD]: http://www.alljapaneseallthetime.com/blog/series/mcd-revolution
[MCTB]: http://www.interactivebuddha.com/mctb.shtml
[Mahavira]: https://en.wikipedia.org/wiki/Mahavira
[Mainländer]: http://de.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
[Makkhali Gosala]: https://en.wikipedia.org/wiki/Makkhali_Gosala
[Marcion]: http://en.wikipedia.org/wiki/Marcion_of_Sinope
[Markan priority]: http://en.wikipedia.org/wiki/Markan_priority
[Mere Addition]: http://en.wikipedia.org/wiki/Mere_addition_paradox
[Michael Persinger]: http://en.wikipedia.org/wiki/Michael_Persinger
[Mirror Test]: http://en.wikipedia.org/wiki/Mirror_test
[Missionary Paradox]: http://everything2.com/title/Missionary+Paradox
[Moral Luck]: http://plato.stanford.edu/entries/moral-luck/
[Moulin Rouge!]: http://en.wikipedia.org/wiki/Moulin_Rouge!
[Multiple Drafts]: http://www.scholarpedia.org/article/Multiple_drafts_model
[Multiple Realizability]: http://en.wikipedia.org/wiki/Multiple_realizability
[Nagarjuna]: http://en.wikipedia.org/wiki/Nagarjuna
[Naraka]: http://en.wikipedia.org/wiki/Naraka
[Nirodha Samapatti]: http://web.mac.com/danielmingram/iWeb/Daniel%20Ingram%27s%20Dharma%20Blog/The%20Blook/2CECD5EA-6058-4428-8DDD-002856C2E28A.html
[Non-Dualism]: http://en.wikipedia.org/wiki/Non-dualism
[Olsenbande]: https://en.wikipedia.org/wiki/Olsen_Gang
[Oreo]: https://en.wikipedia.org/wiki/Oreo
[Original Position]: http://en.wikipedia.org/wiki/Original_position
@@ -76,12 +116,20 @@
[Pali Canon]: https://en.wikipedia.org/wiki/Pali_canon
[Paperclipper]: http://wiki.lesswrong.com/wiki/Paperclip_maximizer
[Pascal's Mugging]: http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/
[Phantom Time]: http://en.wikipedia.org/wiki/Phantom_time_hypothesis
[Pirates Who Don't Do Anything]: http://tvtropes.org/pmwiki/pmwiki.php/Main/ThePiratesWhoDontDoAnything
[Principle of Charity]: http://en.wikipedia.org/wiki/Principle_of_charity
[Profiling]: http://en.wikipedia.org/wiki/Profiling_(computer_programming)
[Purana Kassapa]: https://en.wikipedia.org/wiki/Purana_Kassapa
[QM table]: http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics#Comparison
[Quantum Monadology]: http://cognet.mit.edu/posters/TUCSON3/Yasue.html
[RMS]: http://en.wikipedia.org/wiki/Richard_Stallman
[RTK]: http://en.wikipedia.org/wiki/Remembering_the_Kanji
[Regex 2 problems]: http://www.codinghorror.com/blog/2008/06/regular-expressions-now-you-have-two-problems.html
[Repugnant Conclusion]: http://en.wikipedia.org/wiki/Repugnant_Conclusion
[Risk Aversion]: http://en.wikipedia.org/wiki/Risk_Aversion
[Robert M. Price]: http://robertmprice.mindvendor.com
[Sakadagami]: http://en.wikipedia.org/wiki/Sakadagami
[Sanjaya Belatthaputta]: https://en.wikipedia.org/wiki/Sanjaya_Belatthaputta
[Satan]: http://en.wikipedia.org/wiki/Satan
[Sathya Sai Baba]: http://en.wikipedia.org/wiki/Sathya_Sai_Baba
@@ -90,14 +138,24 @@
[Sensates]: http://mimir.net/psmush/sensates.shtml
[Serotonin Syndrome]: http://en.wikipedia.org/wiki/Serotonin_syndrome
[Shangri-la diet]: https://en.wikipedia.org/wiki/The_Shangri-La_Diet
[Shinto]: http://en.wikipedia.org/wiki/Shinto
[Signaling]: http://wiki.lesswrong.com/wiki/Signaling
[Simon Magus]: http://en.wikipedia.org/wiki/Simon_Magus
[Sisyphus]: http://en.wikipedia.org/wiki/The_Myth_of_Sisyphus
[Start of Darkness]: http://tvtropes.org/pmwiki/pmwiki.php/Main/StartOfDarkness
[Status]: http://wiki.lesswrong.com/wiki/Status
[Strawman Has A Point]: http://tvtropes.org/pmwiki/pmwiki.php/Main/StrawmanHasAPoint
[Stromberg]: http://en.wikipedia.org/wiki/Stromberg_%28TV_series%29
[Sutrayana]: http://en.wikipedia.org/wiki/Sutrayana
[Sutrayana]: http://www.rigpawiki.org/index.php?title=Sutrayana
[Tarot Fool]: http://en.wikipedia.org/wiki/The_Fool_%28Tarot_card%29
[Tathagata]: http://en.wikipedia.org/wiki/Tath%C4%81gata
[Ted Kaczynski]: http://en.wikipedia.org/wiki/Ted_Kaczynski
[Theravada]: http://en.wikipedia.org/wiki/Theravada
[Trivialism]: http://en.wikipedia.org/wiki/Trivialism
[Turing Machine]: http://en.wikipedia.org/wiki/Turing_machine
[Unity of Knowledge and Action]: http://www.iep.utm.edu/wangyang/#H4
[Unknown God]: http://en.wikipedia.org/wiki/Unknown_God
[Utility Monster]: https://en.wikipedia.org/wiki/Utility_monster
[VHEMT]: http://en.wikipedia.org/wiki/Voluntary_human_extinction_movement
[Vajrayana]: http://en.wikipedia.org/wiki/Vajrayana
[Vampire RPG]: http://en.wikipedia.org/wiki/Vampire:_The_Masquerade
@@ -105,12 +163,17 @@
[Visuddhimagga]: http://en.wikipedia.org/wiki/Visuddhimagga
[Wang Yangming]: http://www.iep.utm.edu/wangyang/
[Wireheading]: http://www.wireheading.com/
[World Population]: http://en.wikipedia.org/wiki/World_population
[Wu Wei]: https://en.wikipedia.org/wiki/Wu_wei
[Yamantaka]: http://en.wikipedia.org/wiki/Yamantaka
[Yana]: http://en.wikipedia.org/wiki/Yana_%28Buddhism%29
[Yotsuba]: http://en.wikipedia.org/wiki/Yotsuba&!
[al-Ghazali]: http://en.wikipedia.org/wiki/Al-Ghazali
[meme]: https://en.wikipedia.org/wiki/Meme
[quadtree]: https://en.wikipedia.org/wiki/Quadtree
[quark]: http://en.wikipedia.org/wiki/Quark_(cheese)
[schächten]: http://en.wikipedia.org/wiki/Shechita
[subs2srs]: http://rtkwiki.koohii.com/wiki/Subs2srs
[unsupervised universe]: http://wiki.lesswrong.com/wiki/Unsupervised_universe
[ジャックと豆の木]: http://en.wikipedia.org/wiki/Jack_and_the_Beanstalk_%281974_film%29
[秒速5センチメートル]: http://en.wikipedia.org/wiki/5_Centimeters_Per_Second

View File

@@ -12,7 +12,7 @@
[Klout]: http://klout.com/#/muflax
[LibraryThing]: http://www.librarything.com/profile/muflax
[PredictionBook]: http://predictionbook.com/users/muflax
[Twitter]: http://twitter.com/muflax
[Twitter]: https://twitter.com/#!/muflax
[whatiswrongwith.me]: http://whatiswrongwith.me/muflax
<!-- tweets -->
@@ -29,3 +29,4 @@
[daily screenshot]: https://github.com/muflax/scripts/blob/master/daily_screenshot.sh
[fume]: https://github.com/muflax/fume
[fumetrap]: https://github.com/muflax/fumetrap
[github web history]: https://github.com/muflax/scripts/blob/master/google_web_history.rb

View File

@@ -3,6 +3,7 @@
[Age of Decadence]: http://www.irontowerstudio.com/
[Alan Dawrst]: http://www.utilitarian-essays.com/suffering-nature.html
[Blackmore Free Will]: http://www.susanblackmore.co.uk/Chapters/Brockman2005.htm
[Blackmore no-self]: http://www.susanblackmore.co.uk/Articles/JCS2012.htm
[Breaking the Spell]: http://www.philosophypress.co.uk/?p=1001
[Bro Epicurus]: http://www.philosophybro.com/2011/03/epicurus-sovran-maxims-summary.html
[Carrier Vegetarianism]: http://freethoughtblogs.com/carrier/archives/87
@@ -49,6 +50,28 @@
[suffering per kg]: http://www.utilitarian-essays.com/suffering-per-kg.html
[tripzine]: http://www.tripzine.com/listing.php?smlid=268
[xkcd lego]: https://xkcd.com/659/
[last generation]: http://opinionator.blogs.nytimes.com/2010/06/06/should-this-be-the-last-generation/
[checkmate]: https://s3.amazonaws.com/data.tumblr.com/tumblr_lv1atyRcpy1qj9k6oo1_500.png
[xkcd rocks]: http://xkcd.com/505/
[Anders Shirt]: http://www.katzundgoldt.de/ru_anders.htm
[xkcd atheist]: http://xkcd.com/774/
[Jack Torrent]: http://bakabt.com/154403-jack-and-the-beanstalk-jack-to-mame-no-ki.html
[Breaking Dawn]: http://www.rifftrax.com/rifftrax/twilight-saga-breaking-dawn-pt-1
[Causal Inference]: http://arxiv.org/abs/0804.3678
[Causal Markov]: http://arxiv.org/abs/1002.4020
[Catholic Meditation]: http://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_19891015_meditazione-cristiana_en.html
[Jaynes Evidence]: http://www.julianjaynes.org/evidence_summary.php
[philosophical health check]: http://www.philosophersnet.com/games/check.php
[xkcd fitocracy]: http://xkcd.com/940/
[LibraryThing challenge]: http://www.librarything.com/topic/82131
[onion horoscope]: http://www.theonion.com/articles/your-horoscopes-week-of-january-10-2012,27001/
[The Attention Revolution]: http://www.amazon.com/Attention-Revolution-Unlocking-Power-Focused/dp/0861712765
[Solomonoff beard]: http://www.scholarpedia.org/article/File:RaySolomonoff2001.jpg
[Turtles]: http://en.wikipedia.org/wiki/Turtles_all_the_way_down
[Kant Song]: http://www.raikoth.net/Stuff/ddis/dsong_kant.html
[King in the Mountain]: http://squid314.livejournal.com/306912.html
[Unbreakable]: http://diabasis.com/2011/06/18/could-there-be-beings-that-are-not-wrong-to-make/
[nutrient-rich sludge]: http://www.penny-arcade.com/comic/2010/1/25/
<!-- LessWrong -->
[LW bipolar]: http://lesswrong.com/lw/6nb/ego_syntonic_thoughts_and_values/4igy
@@ -59,6 +82,26 @@
[LW words]: http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
[LessWrong]: http://lesswrong.com
[PlaidX torture]: http://lesswrong.com/lw/5ro/what_bothers_you_about_less_wrong/47ph
[LW belief propagation]: http://lesswrong.com/lw/8ib/connecting_your_beliefs_a_call_for_help/
[LW suicide]: http://lesswrong.com/r/discussion/lw/9jg/how_would_you_talk_a_stranger_off_the_ledge/5vgy
[LW corrupted]: http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/
[LW chain]: http://lesswrong.com/lw/99t/can_the_chain_still_hold_you/
[LW impossible]: http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/
[LW not great]: http://lesswrong.com/lw/9p/rationality_its_not_that_great/
[LW emergence]: http://lesswrong.com/lw/iv/the_futility_of_emergence/
[LW group selection]: http://lesswrong.com/lw/kw/the_tragedy_of_group_selectionism/
[LW button]: http://lesswrong.com/lw/59j/how_many_of_me_are_there/3y3u
[LW deontology incomprehension]: http://lesswrong.com/lw/435/what_is_eliezer_yudkowskys_metaethical_theory/3fnj
[LW praise]: http://lesswrong.com/lw/5p2/people_who_want_to_save_the_world/497p
[LW values fulfilled]: http://lesswrong.com/lw/59j/how_many_of_me_are_there/3y2h
[LW wireheading request]: http://lesswrong.com/lw/5ym/natural_wireheadings_formal_request/4a3t
[LW SL5]: http://lesswrong.com/lw/1t0/shock_level_5_big_worlds_and_modal_realism/
[LW date]: http://lesswrong.com/r/discussion/lw/980/singularity_institute_executive_director_qa_2/
[LW minecraft]: http://lesswrong.com/lw/8n9/rationality_quotes_december_2011/5dal
[LW leverage]: http://lesswrong.com/lw/9ar/on_leverage_researchs_plan_for_an_optimal_world/5n74
<!-- Hanson -->
[Hanson smile]: http://www.overcomingbias.com/2009/09/poor-folks-do-smile.html
<!-- software -->
[Creative Commons]: http://creativecommons.org/licenses/by-nc-sa/3.0/de
@@ -72,21 +115,36 @@
<!-- gwern -->
[Gwern URL]: http://www.gwern.net/Archiving%20URLs
[Narrowing Circle]: http://www.gwern.net/Notes#the-narrowing-circle
[Gwern anonymity]: http://www.gwern.net/Death%20Note%20Anonymity#mistake-2
<!-- Sister Y -->
[Sister Asymmetry]: http://theviewfromhell.blogspot.com/2008/07/austrian-basement-and-beyond.html
[Sister Epilogue]: http://theviewfromhell.blogspot.de/2010/12/living-in-epilogue-social-policy-as.html
[Sister Y]: http://theviewfromhell.blogspot.com
[The View from Hell]: http://theviewfromhell.blogspot.com
[Sister golem]: http://theviewfromhell.blogspot.com/2010/09/pathetic-golem.html
[Sister Kaldor]: http://theviewfromhell.blogspot.com/2011/01/pareto-kaldor-hicks-and-deserving.html
<!-- Moldbug -->
[How Dawkins got pwned]: http://unqualified-reservations.blogspot.com/2007/10/how-dawkins-got-pwned-part-5.html
[Moldbug Left Right]: http://unqualified-reservations.blogspot.com/2008/06/olxi-truth-about-left-and-right.html
[Moldbug Condensed]: http://www.corrupt.org/columns/martin_regnen/condensed_moldbuggery
<!-- Meaningness -->
[Buddhism for Vampires]: http://buddhism-for-vampires.com
[Chapman Disgust]: http://meaningness.wordpress.com/2011/07/22/disgust-horror-western-buddhism/
[Meaningness]: https://meaningness.wordpress.com/
[BFV Monsters]: http://buddhism-for-vampires.com/we-are-all-monsters
[BFV shadow]: http://buddhism-for-vampires.com/eating-the-shadow
[Protestant Buddhism]: http://meaningness.wordpress.com/2011/06/24/protestant-buddhism/
[Chapman Fiction]: http://meaningness.com/metablog/ken-wilber-boomeritis-artificial-intelligence
[Chapman left out]: http://meaningness.wordpress.com/2011/07/12/what-got-left-out-of-%E2%80%9Cmeditation%E2%80%9D/
[Chapman theravada]: http://meaningness.wordpress.com/2011/07/07/theravada-reinvents-meditation/
<!-- XiXiDu -->
[XiXiDu]: http://kruel.co/
[xixidu search]: https://plus.google.com/106808239073321070854/posts/HMQmaWaJy3u
[xixidu utilitarian]: http://kruel.co/2011/07/24/open-problems-in-ethics-and-rationality/
<!-- internal links -->
[RSS]: /rss.xml

View File

@@ -34,3 +34,10 @@
[maggots head]: http://www.liveleak.com/view?i=8d5_1219029078
[reblog]: http://www.youtube.com/watch?feature=player_detailpage&v=roTrYCUhOu0#t=86s
[verboten]: https://www.youtube.com/watch?v=OfPCfqnQlpM
[cat videos]: http://www.youtube.com/watch?v=qpl5mOAXNl4
[Sniper Feelings]: http://www.youtube.com/watch?v=9NZDwZbyDus
[Jack nimble]: http://www.youtube.com/watch?v=e9XKVTNs1g4
[múm]: http://www.youtube.com/watch?v=oHTFmJk7fH0
[The Humans Are Dead]: http://www.youtube.com/watch?v=WGoi1MSGu64
[Stanford Metaethics]: http://www.youtube.com/watch?v=kBdfcR-8hEY
[Schopenhauer Lecture]: http://www.youtube.com/watch?feature=player_detailpage&v=aK4pR1Uatqw#t=1084s

View File

@@ -5,7 +5,7 @@ tags:
- deontology
- moralism
techne: :done
episteme: :personal
episteme: :speculation
slug: 2012/02/03/being-immoral/
---

View File

@@ -5,10 +5,10 @@ tags:
- algorithmic magic
- consciousness
- ontology
- possibleworldproblems
- #possibleworldproblems
- schizophrenic episodes
techne: :done
episteme: :speculation
episteme: :emotional
slug: 2012/03/08/ontological-therapy/
---
@@ -71,9 +71,9 @@ Instead of an explanation, a little play:
I better stop there. That's only a small fragment of the whole mess. I didn't even mention uncertainty about meta-ethics, utility calculations ('cause as XiXiDu has correctly observed, if utilitarianism is right, we never ever get to relax, and have to fully embrace the worst consequences of Pascal's Mugging), how it removes "instances" as meaningful concepts so that "I will clone you and torture the clone" stops being a threat, but "I will make my calculations dependent on your decision" suddenly is, or how all of this fits so perfectly together, you'd think it's all actually true.
What I want to talk about is this: it's completely eating me alive. This is totally basilisk territory. You don't get to ever die (this really bums me out because I don't like being alive), you have to deal with everything at once right now (no FAI to save you, not even future-you), any mistake causes massive harm (good luck being perfect) and really, normalcy is impossible. How can you worry about bloody coffee or sex if *all of existence* is at stake because algorithmic dependencies entangle you with so vast a computational space? You have to deal with not just Yahweh, but *all possible gods*, and you are watching [cat videos](http://www.youtube.com/watch?v=qpl5mOAXNl4)? Are you *completely insane*?!
What I want to talk about is this: it's completely eating me alive. This is totally basilisk territory. You don't get to ever die (this really bums me out because I don't like being alive), you have to deal with everything at once right now (no FAI to save you, not even future-you), any mistake causes massive harm (good luck being perfect) and really, normalcy is impossible. How can you worry about bloody coffee or sex if *all of existence* is at stake because algorithmic dependencies entangle you with so vast a computational space? You have to deal with not just Yahweh, but *all possible gods*, and you are watching [cat videos][]? Are you *completely insane*?!
This is not just unhealthy. This is "I'm having a mental breakdown, someone give me the anti-psychotics please". I've tried this [belief propagation thing](http://lesswrong.com/lw/8ib/connecting_your_beliefs_a_call_for_help/). As a result, I don't believe in time, selves, causality, simplicity, physics, plans, goals, ethics or anything really anymore. I have absolutely no ground to stand on, nothing I can comfortably just believe, no idea how to make any decision at all. I can't even make total skepticism work because skepticism itself is an artifact of inference algorithms and [moral luck](http://en.wikipedia.org/wiki/Moral_luck) just pisses on your uncertainty.
This is not just unhealthy. This is "I'm having a mental breakdown, someone give me the anti-psychotics please". I've tried this [belief propagation thing][LW belief propagation]. As a result, I don't believe in time, selves, causality, simplicity, physics, plans, goals, ethics or anything really anymore. I have absolutely no ground to stand on, nothing I can comfortably just believe, no idea how to make any decision at all. I can't even make total skepticism work because skepticism itself is an artifact of inference algorithms and [moral luck][Moral Luck] just pisses on your uncertainty.
*I hate this whole rationality thing*. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice "let's build an AI so we can fuck catgirls all day" universe. The worst that can happen is not the extinction of humanity or something that mundane - instead, you might piss off a whole pantheon of jealous gods and have to deal with them *forever*, or you might notice that *this has already happened* and you are already being computationally pwned, or that *any bad state you can imagine exists*. Modal fucking realism.
@@ -85,7 +85,7 @@ I think the bloody continentals were right all along. Analytical philosophy is f
You might try the "I am the instantiation of an algorithm" sleight-of-hand, but that's really problematic. Do you also believe God has given you information about the Absolute Encoding Scheme? (If yes, want some of my anti-psychotics?) How can you know what spatial arrangement of particles "encodes" what particular algorithm? This is an unsolvable problem.
But worse than that, even *if* you could do it, I don't think you actually grasp the implications of such a view. Here's [Susan Blackmore](http://www.susanblackmore.co.uk/Articles/JCS2012.htm), giving an eloquent description of how the position is typically envisioned:
But worse than that, even *if* you could do it, I don't think you actually grasp the implications of such a view. Here's [Susan Blackmore][Blackmore no-self], giving an eloquent description of how the position is typically envisioned:
> This "me" that seems so real and important right now, will very soon dissipate and be gone forever, along with all its hopes, fears, joys and troubles. Yet the words, actions and decisions taken by this fleeting self will affect a multitude of future selves, making them more or less insightful, moral and effective in what they do, as well as more or less happy.
@@ -134,4 +134,4 @@ And with this, muflax felt enlightened.
For a moment, that is.
Because when you doubt your thought processes because you suspect they are emotionally exploiting you... and you reach a conclusion based on an enlightened state of mind you feel when thinking this conclusion... well, then you ain't paying much attention.

View File

@@ -9,7 +9,7 @@ episteme: :speculation
slug: 2012/01/28/simplifying-the-simulation-hypothesis/
---
Just slightly too long for [Twitter](https://twitter.com/#!/muflax): Everyone who has experimented with lucid dreaming knows that a computer the size of a coconut, primarily designed to climb trees, is enough to simulate worlds of sufficient detail to convince a mind that it is in a full world, containing many other minds it can communicate with.
Just slightly too long for [Twitter][]: Everyone who has experimented with lucid dreaming knows that a computer the size of a coconut, primarily designed to climb trees, is enough to simulate worlds of sufficient detail to convince a mind that it is in a full world, containing many other minds it can communicate with.
This should dramatically lower our bound of the necessary computational power of a computer simulating *you*.
@@ -17,4 +17,4 @@ Ask not how expensive it might be to simulate the whole universe you see with it
Also, if it is easier to fool you than to build a whole world, then what evidence do you have of other minds? If there are no other minds, are there still anthropic puzzles? If the reference class is small enough, birth rank stops being surprising.
But do not consider the thought that, like in a dream, it is your own expectation that shapes the world, for then you would have to answer why you would imagine a world like this, so unlikely and wasteful, as if you wanted to distract yourself from solipsism. This thought brings only madness.

View File

@@ -4,7 +4,6 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=973
---
Recently,
@@ -18,4 +17,4 @@ Fundamentally, there are several arguments for antinatalism, and only one of tho
It would require that you have the technology to destroy all of humanity, but *not* to improve circumstances.
*That's a pretty small margin of error.* You better have a good argument for it.

View File

@@ -5,12 +5,13 @@ tags:
- antinatalism
- i do what i must because i can
techne: :done
episteme: :speculation
episteme: :fiction
slug: 2012/01/19/introducing-antinatalist-antelope/
---
<img alt="" src="http://25.media.tumblr.com/tumblr_ly1vbmddTG1rndvvro1_400.jpg" class="aligncenter" width="400" height="400" />
<%= image("tumblr_ly1vbmddTG1rndvvro1_400.jpg", "Antinatalist Antelope") %>
<img alt="" src="" class="aligncenter" width="400" height="400" />
(go to [my tumblr](http://antinatalism.tumblr.com/) for more)
(go to [my tumblr][Antinatalism Tumblr] for more)
Somebody had to do it, and that somebody might as well be me.

View File

@@ -10,7 +10,7 @@ episteme: :speculation
slug: 2012/02/15/sunk-cost-fallacy-assumes-a-theory-of-time/
---
Just read this [on LW](http://lesswrong.com/r/discussion/lw/9jg/how_would_you_talk_a_stranger_off_the_ledge/5vgy) (emphasis mine):
Just read this [on LW][LW suicide] (emphasis mine):
> > The treatability of depression, as defined by the likelihood that you eventually get these people to claim they're better, doesn't tell me how much they suffered before getting to this point, whether they would voluntarily go through it again to survive, and what their future risks of recidivism are.
>
@@ -31,4 +31,4 @@ More typically people go with 2), but then the moment of evaluation is always ir
This generalizes to most sunk cost fallacies of course, not just lives worth continuing. If a project is worth working on, it is always so, or never so. How many resources you put into it or how much progress you have made is irrelevant.
I don't know if this is an important argument *for* antinatalism and suicide or *against* B-Theory. Meh, modus tollens, modus ponens, right?

View File

@@ -1,5 +1,6 @@
---
title: The Asymmetry, an Evolutionary Explanation
alt_titles: [Asymmetry Evolutionary]
date: 2012-01-28
tags:
- antinatalism
@@ -10,7 +11,7 @@ episteme: :speculation
slug: 2012/01/28/the-asymmetry-an-evolutionary-explanation/
---
> [W]e think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don't usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as "the asymmetry" and it is not easy to justify. -- [source](http://opinionator.blogs.nytimes.com/2010/06/06/should-this-be-the-last-generation/)
> [W]e think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don't usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as "the asymmetry" and it is not easy to justify. -- [source][last generation]
It just hit me how *obvious* an evolutionary explanation for the asymmetry is. Azathoth doesn't give a shit about children's well-being and has no interest at all in making *us* care. But what the Mad Designer *does* care about is a worthwhile investment. Having children is expensive, especially for the mother. If resources are short, it might well be worth it to abort a child instead of bringing it to term. (This happens all the time.) If we think a child is particularly likely to be sick, it will just impose a cost on us and no benefit. So we feel bad about it, so that we may do something about it. No such feedback is necessary to make children in general.
@@ -18,4 +19,4 @@ The asymmetry isn't about *potential people*. It's about *how we can benefit fro
We should therefore suspect that the asymmetry is stronger when the potential people have reduced fitness, but not when they are simply dissatisfied. As far as I can tell, this is the case. People seem more willing to be apathetic about someone being born into a dead-end career than about someone being very sick, even though poverty creates much more suffering.
I suspect more and more that *any* talk of harm and benefit is wrong and has nothing to do with true morality. We are not just running on [corrupted hardware](http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/), but *evil* hardware.

View File

@@ -7,10 +7,9 @@ tags:
- non-dualism
techne: :wip
episteme: :speculation
slug: ?p=885
---
Cellular automaton implies no green, but green, therefore no mind. [Checkmate](http://24.media.tumblr.com/tumblr_lv1atyRcpy1qj9k6oo1_500.png), materialists!
Cellular automaton implies no green, but green, therefore no mind. [Checkmate][checkmate], materialists!
Ok, that was the short version, now the long version.
@@ -22,11 +21,11 @@ The argument is fundamentally very similar to Chalmers' P-Zombie argument. (Wait!
You can build some interesting patterns in GoL:
<img alt="" src="http://upload.wikimedia.org/wikipedia/commons/e/e5/Gospers_glider_gun.gif" class="aligncenter" width="250" height="180" />
<%= image("Gospers_glider_gun.gif", "Glider Gun") %>
But wait, there's more! GoL is actually Turing-complete. You can run any kind of computation on the board. Here's an actual Turing machine implementation:
<a href="http://upload.wikimedia.org/wikipedia/commons/0/05/Turing_Machine_in_Golly.png"><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/0/05/Turing_Machine_in_Golly.png/379px-Turing_Machine_in_Golly.png" class="aligncenter" width="379" height="212" /></a>
<%= image("Turing_Machine_in_Golly.png", "TM") %>
Don't underestimate these little automatons. They are seriously powerful. Anything your PC can do, they can do. (Well, not always fast, but sure.) They have some cool specific uses in biology, cryptography and so on. They aren't just pretty toys.
@@ -40,7 +39,7 @@ We have a discrete, infinite, 2-dimensional space. Each point in that space has
What neat properties can we see? Well, it's all deterministic. No probabilities involved at all. Furthermore, there are no hidden variables or unique histories. You can just take the board at any point in time you want and start running it from there. You don't need to know anything about its past at all. The computations are all nicely local, both in space and time.
There are also no individual particles. In fact, there are no particles at all. (You could say there are *relations between particles*, without there being any actual particles.) You only have a space that consists of points that have possible states. That's all. There is no "on particle" traveling over the board. It might look like that, as patterns get propagated, but that's only an additional abstraction you might use to understand what's going on. The board has no movement, only state changes. ([Zeno made that point a long time ago.](http://en.wikipedia.org/wiki/Zeno%27s_paradoxes#The_arrow_paradox))
There are also no individual particles. In fact, there are no particles at all. (You could say there are *relations between particles*, without there being any actual particles.) You only have a space that consists of points that have possible states. That's all. There is no "on particle" traveling over the board. It might look like that, as patterns get propagated, but that's only an additional abstraction you might use to understand what's going on. The board has no movement, only state changes. ([Zeno made that point a long time ago.][Arrow Paradox])
Furthermore, one could eliminate time by thinking of the possible states of the whole board as arranged in a directed graph, like so:
@@ -48,7 +47,7 @@ Furthermore, one could eliminate time by thinking of the possible states of the
If you think about it that way, then there isn't an objective time and no privileged board. There are just ((in)finitely many) board configurations, and they are causally linked, as determined by the transition rule we decided on. So you can look at any board and decide, if I apply this rule, which boards can I reach, and which boards can reach me? (Why would you prefer a timeless setup over a timed one? Because it's algorithmically simpler. You don't have to specify "these are the rules, and only *this* board and its descendants exist", you just say "all boards exist". The downside is, you now have all boards flying around and they require many more resources. But the rules are simpler. It's a trade-off. For our purposes, both approaches are fine. This argument works either way.)
Are we missing anything? No. I can totally run this now. This is literally all I need to know to write a program that runs the Game of Life. I could also run it using a Go board or [rocks in the desert](http://xkcd.com/505/). Causally speaking, we're *done*.
Are we missing anything? No. I can totally run this now. This is literally all I need to know to write a program that runs the Game of Life. I could also run it using a Go board or [rocks in the desert][xkcd rocks]. Causally speaking, we're *done*.
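To make that concrete, here's a minimal sketch of the whole game in Ruby (the set-of-live-cells representation and the glider test are just one convenient choice, nothing canonical):

```ruby
require 'set'

# One Game of Life step. The board is a Set of [x, y] pairs marking the
# live cells; every cell not in the set is dead. That's the entire state.
def step(board)
  # Count live neighbors for every cell adjacent to at least one live cell.
  counts = Hash.new(0)
  board.each do |x, y|
    (-1..1).each do |dx|
      (-1..1).each do |dy|
        counts[[x + dx, y + dy]] += 1 unless dx == 0 && dy == 0
      end
    end
  end
  # Birth on exactly 3 live neighbors, survival on 2 or 3.
  counts.select { |cell, n| n == 3 || (n == 2 && board.include?(cell)) }
        .keys.to_set
end

glider = Set[[1, 0], [2, 1], [0, 2], [1, 2], [2, 2]]
4.times { glider = step(glider) }
# Same glider, shifted one cell diagonally: "movement" without anything
# ever moving, only state changes, as per Zeno above.
```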
Now where's the conscious awareness?
@@ -64,25 +63,25 @@ It could be an aspect of the rules. But the rules are extremely simple. "If 3 or
We have one last remaining thing - all of state-space. The whole board could have mental states. Certainly a plausible guess. But then, wouldn't you expect mental phenomena to always be global? And unless you are the solipsist, you probably think there is more than one mind in the universe. So that's not good either.
It's as if minds were constrained to a certain subset of cells, a certain section on the board. But where do these borders come from? They are not in the rules. The cells don't know them. Where are they coming from? There would have to be a *separate* set of rules, *additional* to everything we know, that determine what states are mental and what aren't. That's property dualism. (Chalmers defends it. Many physicalists are property dualists in denial. I'm not particularly fond of it, personally. I don't like [dualisms](http://en.wikipedia.org/wiki/Non-dualism).)
It's as if minds were constrained to a certain subset of cells, a certain section on the board. But where do these borders come from? They are not in the rules. The cells don't know them. Where are they coming from? There would have to be a *separate* set of rules, *additional* to everything we know, that determine what states are mental and what aren't. That's property dualism. (Chalmers defends it. Many physicalists are property dualists in denial. I'm not particularly fond of it, personally. I don't like [dualisms][Non-Dualism].)
Or you simply deny mental states. It's the obvious implication, really. If you didn't know that consciousness existed, if you were some computer scientist from a P-Zombie universe without mental phenomena, would you ever suspect any? Probably not. And just as naturally, why not dismiss all this talk about "experience" as confused. Take a thorough third-person perspective and get rid of consciousness. (Dennett seems to try this, though I can't make sense of half the stuff he says.)
There's one last possibility. You might say that the mental states are in the *computation*. It's not the actual machine that matters, it's the causal entanglement in the software that runs on it. But if you take this view, then what do you need the machine for? You really don't. You don't need instances, don't need worlds at all. You just need raw math, just dependencies. It's all there in the decision theory. [And as much sympathy as I have for this position](http://blog.muflax.com/2012/03/08/ontological-therapy/), that's still no physicalism, certainly no materialism. It's algorithmic idealism.
There's one last possibility. You might say that the mental states are in the *computation*. It's not the actual machine that matters, it's the causal entanglement in the software that runs on it. But if you take this view, then what do you need the machine for? You really don't. You don't need instances, don't need worlds at all. You just need raw math, just dependencies. It's all there in the decision theory. [And as much sympathy as I have for this position][Ontological Therapy], that's still no physicalism, certainly no materialism. It's algorithmic idealism.
Here's another way to look at it. Imagine an infinite board filled with a properly random arrangement of cells. Any sub-pattern you can think of occurs *somewhere* on the board. If (non-eliminative) materialism is right, we should be able to do the following:
We pick a specific location and zoom in. In this snapshot, there is no conscious mind.
<a href="http://blog.muflax.com/wp-content/uploads/2012/03/1.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/03/1.png" alt="" title="1" width="500" height="500" class="aligncenter size-full wp-image-895" /></a>
<%= image("gol_1.png", "1") %>
But then as we zoom out more (and this is slightly misleading because we would have to zoom out *a lot*), eventually we would observe a conscious mind.
<a href="http://blog.muflax.com/wp-content/uploads/2012/03/2.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/03/2.png" alt="" title="2" width="500" height="500" class="aligncenter size-full wp-image-896" /></a>
<%= image("gol_2.png", "2") %>
And as we zoom out *even more*, other minds would appear, separate from the first one.
<a href="http://blog.muflax.com/wp-content/uploads/2012/03/3.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/03/3.png" alt="" title="3" width="500" height="500" class="aligncenter size-full wp-image-897" /></a>
<%= image("gol_3.png", "3") %>
What property *in the cellular automaton* do we use to draw these boundaries? Is there any reason to say *these* boundaries are conscious, but if we shift them all one cell to the left, they aren't? Excuse me, but I'm invoking the argument from incredulity here.
@@ -90,4 +89,5 @@ Now *if* there were a way to connect certain cells, if they shared a common stat
And if you haven't been screaming "But muflax, you overlooked obvious feature X!" for a couple of paragraphs (and if so, please let me know), then I'm done. Case closed.
Abandon materialism all ye who experience green.

View File

@@ -1,38 +0,0 @@
---
title: Insight or Delusion?
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=859
---
Here are some statements. Some of them are symptoms of mental illnesses, others are important philosophical positions. Can you tell which is which?
(Warning: Not suitable for minors and social constructivists. Playing "Insight or Delusion?" might induce postmodernism. If truth seems unknowable for more than 4 hours, call your metaphysician.)
1. I am not the same person I was 5 minutes ago.
2. "I" don't exist.
3. There are many near-identical copies of me, acting independently.
4.
5. There is no such thing as a "present". All moments in time are equally real.
Which of these are the delusions and which are the insights?
You probably guessed it already, but of course *all of them are both*. Here's the list:
1. There are no persistent selves, only person-moments (generally coming from [materialism](http://en.wikipedia.org/wiki/Materialism)). / X
2. [Anatta](http://en.wikipedia.org/wiki/Anatta) / [Cotard delusion](http://en.wikipedia.org/wiki/Cotard_delusion).
3. Many-worlds interpretation / [subjective doubles](http://en.wikipedia.org/wiki/Syndrome_of_subjective_doubles).
4. Reductionism / no object permanence.
5. B-Theory of time / shrooms. (Time loops are fun.)
6.

View File

@ -6,11 +6,11 @@ tags:
- discordianism
- NAMBLA
techne: :done
episteme: :speculation
episteme: :believed
slug: 2012/02/20/crackpot-beliefs-the-theory/
---
Says [Wiki-sama](http://en.wikipedia.org/wiki/Crank_%28person%29):
Says [Wiki-sama][Crank]:
> Crank magnetism is a term popularized by physiologist and blogger Mark Hoofnagle to describe the propensity of cranks to hold multiple irrational, unsupported or ludicrous beliefs that are often unrelated to one another. Crank magnetism may be considered to operate wherever a single person propounds a number of unrelated denialist conjectures, poorly supported conspiracy theories, or pseudoscientific claims.
@@ -50,7 +50,7 @@ If we now accept TA over PA, and still reject LA, we arrive at Discordianism. Be
> GP: How can that be?
> M2: I don't know man, I didn't do it.
Yet, I find a full rejection of the Loyalty Axiom distasteful. Having *no* commitment to beliefs is too weak a position. Maybe there exists a middle path, an improved version, just as the substitution of motives that gets us from [PA](http://en.wikipedia.org/wiki/Laws_of_Form) to TA?
Yet, I find a full rejection of the Loyalty Axiom distasteful. Having *no* commitment to beliefs is too weak a position. Maybe there exists a middle path, an improved version, just as the substitution of motives that gets us from [PA][Laws of Form] to TA?
Therefore I propose:
@@ -58,7 +58,7 @@ Therefore I propose:
You can think of RPA as respecting the archetypal ideal of a belief. You can change it, but it must be done in appropriate ways. You can [deconstruct](http://tvtropes.org/pmwiki/pmwiki.php/Main/Deconstruction) a belief, but you must be true to its essence, its core motivation. You can't just make a cynical version and call it a day. You must truly explore the belief.
However, and this is where the ritual aspect comes in, a deconstructed (or revised) belief is not suited for actual use. You must either abandon the practice, or preferably *reconstruct* it, staying true to its innermost motivation and context. Only then can you once again use the belief for its intended purpose. For a clear demonstration of this principle, watch [Hot Fuzz](http://en.wikipedia.org/wiki/Hot_Fuzz).
However, and this is where the ritual aspect comes in, a deconstructed (or revised) belief is not suited for actual use. You must either abandon the practice, or preferably *reconstruct* it, staying true to its innermost motivation and context. Only then can you once again use the belief for its intended purpose. For a clear demonstration of this principle, watch [Hot Fuzz][].
Thus we arrive at Reformed Crackpottery, the framework for the self-aware crank. Just as Discordianism is calling the Aneristic Bluff, so is Orthodox Crackpottery calling the Meta-Aneristic Bluff. As it is written:
@@ -82,4 +82,4 @@ To believe that Some Beliefs Are True is the Aneristic Delusion, to believe that
Reformed Crackpottery calls out both bluffs, by respecting the integrity of beliefs, while making it transparent that they are arbitrary constructs, not reflective of reality. All beliefs are crackpot beliefs, good cranks are just honest about it.
This is the Nondual Delusion.

View File

@@ -10,19 +10,19 @@ tags:
- unknown god
- way too many lists
techne: :done
episteme: :speculation
episteme: :broken
slug: 2012/01/17/es-gibt-leute-die-sehen-das-anders/
---
("There are some people who disagree.", [obligatory T-shirt](http://www.katzundgoldt.de/ru_anders.htm))
("There are some people who disagree.", [obligatory T-shirt][Anders Shirt])
# The Case For Bias
Coming from a kind of Hansonian and Tantric perspective, there aren't such things as "good" and "bad" goals. We might - for game-theoretic reasons - publicly approve of only some of our goals, but whatever we want, we simply want, and it's wrong to say that "I wish I didn't want X". Embrace your [monstrosity](http://buddhism-for-vampires.com/we-are-all-monsters).
Coming from a kind of Hansonian and Tantric perspective, there aren't such things as "good" and "bad" goals. We might - for game-theoretic reasons - publicly approve of only some of our goals, but whatever we want, we simply want, and it's wrong to say that "I wish I didn't want X". Embrace your [monstrosity][BFV Monsters].
I can't deny that I'm a contrarian. Meta-contrarian, in fact. I *like* to disagree with the intellectual mainstream, and I can't even deny that I derive some of my values solely from the fact that The Establishment(tm) doesn't like them. So I thought... maybe I should try to be a *better* contrarian?
(That doesn't mean that *all* my unusual or controversial views are contrarianism. I really would like meditation to be useful, instrumentally and spiritually, and have spent large chunks of my life trying to make it work. Unfortunately, some stuff simply doesn't work or is [misleading](http://blog.muflax.com/2012/01/04/why-you-dont-want-vipassana/). That I call myself an "atheist divine command theorist" nowadays is not [an attempt to disagree](http://xkcd.com/774/) with *both* atheists and theists, but simply derives from the fact that I *really want* theism to be true, but it just isn't, so I'm trying to salvage as many features as I can without going too crazy in the process.)
(That doesn't mean that *all* my unusual or controversial views are contrarianism. I really would like meditation to be useful, instrumentally and spiritually, and have spent large chunks of my life trying to make it work. Unfortunately, some stuff simply doesn't work or is [misleading][Why you don't want Vipassana]. That I call myself an "atheist divine command theorist" nowadays is not [an attempt to disagree][xkcd atheist] with *both* atheists and theists, but simply derives from the fact that I *really want* theism to be true, but it just isn't, so I'm trying to salvage as many features as I can without going too crazy in the process.)
Believing only true things just for the sake of truth is mostly a confused value. The reason you should overcome your biases is not so you only believe true things. Instead, false beliefs *on human hardware* have some typical failure modes that will fuck you up. *This* you should avoid. Fundamentally, there are two problems:
@@ -32,13 +32,13 @@ Believing only true things just for the sake of truth is mostly a confused value
So how would you try to be contrarian without giving up all your rationality and its benefits? You want to believe unusual things, but not end up praying your cancer away. I think a simple way to do it is to merely adjust your priors. Keep all your Bayes (Peace Be Upon Him), but a priori favor contrarian hypotheses.
Example. There's a pretty famous disagreement about whether Jesus is a mythological figure or a (heavily distorted) historical failed preacher. Both are reasonable positions and there is good evidence for both. (Other views like the Zombie Jew are all nonsense.) If you were an ideal and unmotivated Bayesian, both positions would probably be reasonably similar in probability, maybe within 20% of each other. Which you favor would depend mostly on your prior. Is a historical religious founder whose message seriously got out of hand more likely than a cult making up a mythological being and later historicizing it? There are certainly examples of both and it's not immediately obvious which is better as a general view. This is your chance as a contrarian! Simply adjust your prior slightly so that the more controversial view wins. Keep all the evidence and lines of reasoning in place and simply believe that, all else being equal, a myth-to-history is more likely than a failed-leader-to-hero-myth, at 3:2 odds maybe. Wham, you're a mythicist, don't have to live in a [magical parallel universe](http://en.wikipedia.org/wiki/Biblical_inerrancy) in which most evidence doesn't exist, but still get to be someone who disagrees.
Example. There's a pretty famous disagreement about whether Jesus is a mythological figure or a (heavily distorted) historical failed preacher. Both are reasonable positions and there is good evidence for both. (Other views like the Zombie Jew are all nonsense.) If you were an ideal and unmotivated Bayesian, both positions would probably be reasonably similar in probability, maybe within 20% of each other. Which you favor would depend mostly on your prior. Is a historical religious founder whose message seriously got out of hand more likely than a cult making up a mythological being and later historicizing it? There are certainly examples of both and it's not immediately obvious which is better as a general view. This is your chance as a contrarian! Simply adjust your prior slightly so that the more controversial view wins. Keep all the evidence and lines of reasoning in place and simply believe that, all else being equal, a myth-to-history is more likely than a failed-leader-to-hero-myth, at 3:2 odds maybe. Wham, you're a mythicist, don't have to live in a [magical parallel universe][Inerrancy] in which most evidence doesn't exist, but still get to be someone who disagrees.
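In odds form the whole maneuver is a single multiplication. A toy sketch with made-up numbers (the 3:2 prior from above; the likelihood ratio is purely illustrative):

```ruby
# Bayes in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds       = 3.0 / 2.0 # contrarian prior, favoring mythicism 3:2
likelihood_ratio = 0.9       # total evidence mildly favoring historicity
posterior_odds   = prior_odds * likelihood_ratio

puts posterior_odds                        # 1.35, mythicism still ahead
puts posterior_odds / (1 + posterior_odds) # ~0.57 as a probability
```

Note that a prior tweak this small only flips the conclusion when the evidence itself is close to a wash; it can't rescue your contrarian view from strong evidence against it.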
"Isn't this *evil*? You're actively advocating sophistry!" Yes, but it's *efficient* sophistry. You are a monster. Don't feel guilty about it, but do a good job. As a [wise man](http://www.youtube.com/watch?v=9NZDwZbyDus) once said, "Feelins'? Look mate, you know who has a lot of feelings? Blokes what bludgeon their wife to death with a golf trophy. Professionals have standards.". If it makes you feel any better, there is no practical way to choose an unbiased prior in the first place. The only known unbiased prior is the universal prior (explanation soon in the SI series) and it's incomputable, even for very simple examples. You *will* be biased, so why not be explicit about it and be biased in ways that benefit you?
"Isn't this *evil*? You're actively advocating sophistry!" Yes, but it's *efficient* sophistry. You are a monster. Don't feel guilty about it, but do a good job. As a [wise man][Sniper Feelings] once said, "Feelins'? Look mate, you know who has a lot of feelings? Blokes what bludgeon their wife to death with a golf trophy. Professionals have standards.". If it makes you feel any better, there is no practical way to choose an unbiased prior in the first place. The only known unbiased prior is the universal prior (explanation soon in the SI series) and it's incomputable, even for very simple examples. You *will* be biased, so why not be explicit about it and be biased in ways that benefit you?
# Skillful Trolling
Being a contrarian and being a troll are closely related. The only real difference is that a contrarian internalizes their trollish views, while a troll drops them outside a debate. But if you aren't trolling someone, why are you a contrarian in the first place? No-one just believes the [Dark Ages never happened](http://en.wikipedia.org/wiki/Phantom_time_hypothesis) in private. They *have* to publicize it and probably start a flame war over it.
Being a contrarian and being a troll are closely related. The only real difference is that a contrarian internalizes their trollish views, while a troll drops them outside a debate. But if you aren't trolling someone, why are you a contrarian in the first place? No-one just believes the [Dark Ages never happened][Phantom Time] in private. They *have* to publicize it and probably start a flame war over it.
It is therefore an integral part of being a contrarian that you are competent in your subject so you can actually debate someone. If you simply represent mainstream views, you can always appeal to authority. (And where the mainstream is usually right, you're certainly justified in doing so. I'm not dissing the mainstream *in general*.) You can do a good job debating an anti-vaccine crank even if you know very little about medicine or biology. You can simply point to studies, a uniform expert consensus and clear results.
@ -46,13 +46,13 @@ But this shit doesn't fly if you think the mainstream is wrong. You *will* get s
So you have to put a lot of effort into not just understanding mainstream views, but also deeply understanding your contrarian positions, and how to explain them to outsiders. This is a lot of work. You better be ready to dedicate a serious amount of your time to it. You can't be contrarian about a hundred things. Focus.
(Fortunately all contrarians I know *like* this work and don't face akrasia in these fields. Which btw is good evidence against "akrasia is a general limitation" and "akrasia arises from modularity", and evidence for "akrasia is what being a hypocrite, but not acknowledging it, feels like from the inside". [Eat your shadow][BFV shadow].)
# Let's Talk About Me
Enough general arguments. This is my blog and so let's talk about me. (Why not embrace a certain level of narcissism? If public writing works for me, but doesn't seem to depend on feedback (most of my writing I never advertise and it is therefore never commented on, which doesn't particularly bother me), then it seems obvious I'm at least partially motivated by potential attention. Might as well acknowledge that and use it to fuel the learning process.)
Recently, I made a series of critical comments on one of [Luke's posts][LW chain]. I was trying to express a couple of points:
1. Legislation to abolish slavery had questionable effectiveness and mostly moved slavery to the black market where slaves don't have legal representation. It is analogous to the war on drugs.
@ -66,7 +66,7 @@ I also hold the following beliefs (some not too strongly) which I tried to keep
1. Slavery is not morally wrong. At all. I can find no fault with it. Partial legal ownership of humans is already acceptable (we call this "being a parent"), so why not of unrelated humans? A state should enforce any contract people want to make, including contracts about buying other humans. I fully support this. (I am less confident about *inheriting* slavery because I'm skeptical of inheritance *in general*. I also find making someone a slave against their will (say through war) problematic (but maybe defensible), but I firmly support the right of people to sell themselves into slavery.)
2. Slaves probably did not suffer *worse* than comparable non-slaves, so from a perspective of harm reduction, slavery is probably not a relevant evil. It gets its bad reputation mostly through [Progressivist][Moldbug condensed] propaganda.
3. The definition of slavery is very conspicuously selective. A Roman owning a personal assistant is slavery, but millions of prisoners worldwide working under forced conditions (and often against their will) is not? Prison labour should definitely be included in modern slavery statistics, but then they wouldn't look so flattering anymore. Are children legally really different from term-limited slaves? (If you agree with animal rights arguments, what about farm animals? It's as if the institution of slavery per se isn't problematic, just when it applies to certain groups of humans under certain conditions.)
@ -100,7 +100,7 @@ So I was thinking. I love history, and I love all the contrarian views associate
1. make political arguments, especially when they present themselves as being persecuted (even when true),
2. link their beliefs to concrete policy,
3. violate [Hanlon's Razor][].
I should experiment with different techniques here.
@ -111,19 +111,19 @@ So I was thinking. I love history, and I love all the contrarian views associate
4. I really need to get my languages in order. I still can't read Latin. This is seriously not acceptable. I also need to re-evaluate my language priorities. I really wish I could read Akkadian, Russian and Chinese these days. Prof. Arguelles is right, you really need to read 10+ languages to meaningfully appreciate world history and literature. (Speaking them, on the other hand, is as useless as ever. I barely even speak German these days.) Translations are fundamentally bullshit for contrarians. Many good texts *won't* be translated, or translations won't be sufficient to establish the cultural context, or they will even seriously distort the text, as Jaynes has shown.
5. Moral philosophy, theology and political theory are actually useful. (I know! I'm as surprised as you.) As it's list day at muflax' blog today, surprisingly influential on my thought over the last year or two have been:
1. Moldbug's resurrection of reactionary thought (Broken as it is, he has shown me that a serious alternative to progressivism *is* possible and that my admiration for... questionable people and institutions has a general moral and historical core and is genuinely worth developing. It does not just derive from [Evil Is Cool][], but [Strawman Really Has A Point][Strawman Has A Point]. Many unacceptable views today actually have serious arguments and don't derive from people just being dicks.)
2. Antinatalists' defense of deontological rights (I found all rights-based morality questionable before I read [Sister Y][]. Now I take it very seriously and consider it a serious contender for Real True Morality, even potentially Objective Morality.)
3. Divine Command Theory (It's what I actually *wish* I would operate under, which I only understood when I roleplayed an explicit DCTist.)
4. analytical theology (It's surprisingly interesting and relevant as a field, once you reconstruct it from the perspective of computationalism and Turing machines, or as Will Newsome recently called Leibniz' "best of all worlds" argument: "Recursive Universal Dovetailing Measure-Utility Inequality Theorem". Once you accept that the mind might be computation, really weird shit happens as materialist frameworks break down, and you consider acausal interactions and Tegmark universes, and you realize that fundamentally there is no difference between "real" and "hypothetical" scenarios. I may even have found a way to resurrect God. This scares and excites me. At the very least, it might be a strong argument *against* computationalism, which is the only meaningful basis of monist philosophy of mind. Either way, it's very fucked up.)
5. non-protestant religions (That's a shitty name, but there's a certain core of protestantism as observed by [David Chapman][Protestant Buddhism] that repeats itself in other contexts. It's characterized by its lack of ritual, sacredness and worship, and its focus on (pseudo-)rational thought, equality and everyday life. Once I understood that ritual and worship are meaningful practices, I found a lot of value in them and currently try to integrate them more into my life. This seems to be epistemically dangerous, but so far totally worth it. I'm not sure how much of this benefit is specific to my personality, though, nor how influential this really will turn out to be. I feel like no modern construction of this practice exists and I'm stuck with either resurrecting an old religion or building everything from scratch. I really hope Chapman makes lots more progress there.)
Luckily I have stopped all AI and math research, now that I believe only in monetary support. I can put almost all my skills behind programming, history and theology. I like that.
Let us close with a prayer to an [unknown god][Unknown God]:
> I pray to you,
> unknown god,
@ -138,4 +138,4 @@ Let us close with a prayer to an [unknown god](http://en.wikipedia.org/wiki/Unkn
> I accept my penance
> and pray to you,
> unknown god,
> whom I eternally shall serve.
@ -10,12 +10,17 @@ episteme: :speculation
slug: 2012/01/04/some-thoughts-on-bicameral-minds/
---
*This is a reply to wallowinmaya's comment on my last article. I noticed I kinda wrote an article in disguise, so I'm posting it as one.*
> The multiple-personality thing is really fascinating. Do you think it's been a feature or a bug, all things considered? It seems to me that basically everyone has multiple personalities but only one of them is conscious. Your deep acquaintance with your subconsciousness also explains that you endorse wireheading because most usually “subconscious parts” probably find it good. It's only our conscious, ideal and altruistic self that is against it. Am I totally wrong about this? And if our multiple personalities really have conflicting values that would probably render solutions to moral problems like CEV void, right?
I tend to think of it as a feature, but I'm really used to it, so I'm not exactly an impartial judge. Maybe I'm even less functional and inconsistent than the average person, I don't know. I also don't know if it's really the feature that's unusual or just the way I think about it.
There's a really fascinating book called "The Origin of Consciousness in the Breakdown of the Bicameral Mind" by Julian Jaynes (actually really accessible despite its title and available on library.nu). Basically, he proposes that originally both brain hemispheres were independent minds and that one (the right) commanded the other (the left) through hallucinations, mostly voices.
Quote Wiki:
> According to Jaynes, ancient people in the bicameral state of mind would have experienced the world in a manner that has some similarities to that of a schizophrenic. Rather than making conscious evaluations in novel or unexpected situations, the person would hallucinate a voice or "god" giving admonitory advice or commands and obey without question: one would not be at all conscious of one's own thought processes per se.
The main problem with this Bicameral Mode is that you can't really self-reflect and function outside of rigid hierarchies. So once civilizations got too large, these bicameral minds collapsed and merged into the modern subjective consciousness. So basically, the evolution goes:
1. monkeys want to track other monkeys, so they develop monkey-simulating hardware
@ -35,9 +40,11 @@ Anyway. If some basic form of this is true, then there really is no "unified" mi
Back to Jaynes' idea, the difference between a "self" and an "other" is really just the level of personal associations and names. They are both hallucinations, in the sense that they run as simulated monkeys on the brain. Making a decision is just simulating a monkey that does something and then seeing what happens, except that we recognize that the simulated monkey is us. (Sometimes we fail the mirror test and that's called "I spoke to someone in a dream". When you have a lucid dream, try switching which person you control.)
This association process is not perfectly reliable. Particularly schizophrenics and people on certain drugs have it fail on them. Quote two schizophrenics in Jaynes' book:
> Gradually I can no longer distinguish how much of myself is in me, and how much is already in others. I am a conglomerate, a monstrosity, modeled anew each day.
>
> My ability to think and decide and will to do, is torn apart by itself. Finally, it is thrown out where it mingles with every other part of the day and judges what it has left behind. Instead of wishing to do things, they are done by something that seems mechanical and frightening ... the feeling that should dwell within a person is outside longing to come back and yet having taken with it the power to return.
Jaynes argues pretty convincingly that the left hemisphere, which is normally in charge of interacting with the outside world, can't *refuse* orders in Bicameral Mode. The right side says *anything* and the left side does it. It can't veto orders at all. That would obviously be easy to exploit, so you need to distinguish between "this is a command" and "this is just talk". One heuristic the left side uses is to only recognize something as an order when it comes from someone higher up in the status hierarchy. So the right side impersonates high-status figures (gods, kings, parents). (There are almost no cases of someone hallucinating low-status characters! No-one thinks they are hearing voices that belong to a random beggar. It's always gods, kings or something equivalent.)
And that's how Bicameral Consciousness works. Both hemispheres already have extensive hardware to simulate people. They need it just to keep up with local status and tribe associations. So they can re-use this hardware by creating fictitious people (often direct copies of real people at first), run them for a bit and see what results they get. They can even interact with these people (i.e. talk with hallucinations). These simulations then ultimately give a direct order and the brain executes it. Achievement unlocked: complex decision making.
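As a toy model of that decision loop in Ruby (my own sketch of the status heuristic, not anything from Jaynes; all names and numbers are invented):

```ruby
# Left hemisphere's heuristic: only simulated speakers above your own
# status count as commands; everything else is "just talk".
Voice = Struct.new(:name, :status, :command)

def bicameral_decide(voices, own_status)
  voices.each do |v|
    return v.command if v.status > own_status
  end
  nil # no recognized order; keep doing whatever you were doing
end

voices = [
  Voice.new("random beggar", 0, "give me your sandals"),
  Voice.new("dead king",     9, "build the granary"),
]
puts bicameral_decide(voices, 5) # => "build the granary"
```

Which is exactly why the hallucinated voices are always gods and kings: a beggar's voice would never clear the threshold, so simulating one buys you nothing.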
@ -63,9 +70,11 @@ So based on this, the connection between "consciousness" and "goals" is fairly q
On a related note, I don't know how I feel about wireheading anymore. Some days, I think it's the greatest idea ever, on others it looks like an ethical nightmare. I also don't like that AIXI wireheads itself and so screws up our attempts to enslave it. Makes wireheading look like a huge bug from the outside.
Also:
> Your deep acquaintance with your subconsciousness also explains that you endorse wireheading because most usually “subconscious parts” probably find it good. It's only our conscious, ideal and altruistic self that is against it. Am I totally wrong about this? And if our multiple personalities really have conflicting values that would probably render solutions to moral problems like CEV void, right?
Conflicting values aren't necessarily a big problem in general, I think. The universe is pretty big and there's enough space to satisfy a lot of values at the same time. It would be bad if there were multiple fundamentalists who couldn't accept that anyone else might disagree with them, ever, anywhere.
That's one way I currently think about objective morality - it's an attempt to enforce values when you don't have much power. If my values are Objective(tm), then I have an easy way to force others to comply against their will. (Or at least I can tell myself that, if they only thought rationally, they would have the same values as me.) If all value is subjective and accidental, well, how can I stop someone from eating the wrong kind of ice cream, short of building a Jupiter-sized AI and taking over the universe? So if I were not so insecure, maybe I wouldn't feel so bad about morality.
*Anyway. That's the still-in-process thinking I'm currently going through. I'm not sure if I said everything I wanted, but this will have to do for now.*
@ -8,50 +8,58 @@ tags:
- meditation
- wireheading
techne: :done
episteme: :fiction
slug: 2012/01/01/why-this-world-might-be-a-simulation/
---
> Many have undertaken to draw up an account of the things that have been fulfilled among us, just as they were handed down to us by those who from the first were eyewitnesses and servants of the word. With this in mind, since I myself have carefully investigated everything from the beginning, I too decided to write an orderly account for you, most excellent Theophilus, so that you may know the certainty of the things you have been taught. -- Luke 1:1-4
I wonder. If I wrote a kind of autobiography right now, if I tried to explain to a friend what I have learned in the last couple of years, I wonder, would it sound *believable* to a distant reader?
I mean, just *look* at some of this shit.
One huge source of influence is a dude called [Eliezer Yudkowsky][]. Eliezer is a Hebrew name meaning "help of my God". A common variant is Eleasar, "God has helped". You might know Eleasar in its Latin form - Lazarus. Who's Lazarus? The guy Jesus famously raised from the dead. What does Eliezer famously advocate? You should sign up for cryonics. A literal resurrection. Come on, the name's a pun, you can't deny it. In fact, it's deliberately not "Eleasar" because he *hasn't* died yet!
But more importantly, what's more interesting about this Eliezer than cryonics? He wrote the [Less Wrong sequences][LW sequences]. Look at the size of that thing! Over a million words! *One* author? Covering quantum physics, meta-ethics, AI, cogsci, evolution and *how to write fiction*? That's totally believable.
Next dude. Also very prolific Less Wrong poster. Called Luke. As in Ecclesiastical Redactor Luke. What's New Testament Luke's real goal? Unifying the Petrine and Pauline sects. Peter, you might remember, emphasized an Old God, the God of the Torah, and its elaborate laws. Paul, on the other hand, taught salvation by faith and a New God, God the Father. The Old God was petty and cruel, but God the Father brought a radically new message - actual mercy. What's Less Wrong Luke's real goal? Unifying Academia and the Eliezerites. Academia insists on old rules, like peer review and degrees, and its results are mindless and dangerous. If we build the AI that Academia wants, says Eliezer, we would all die. Instead, Eliezer brings a new AI - Friendly AI - and with it a radically new message - actual utopia within our lifetimes.
(Also, Luke is said to be a companion of Paul, the first to preach the gospel of a New God who brings mercy, not judgment. Our Luke is a companion of Eliezer, the first to preach the gospel of Friendly AI, a technology that brings utopia, not existential risk. Luke, in both cases, was the first to bring Paul's message to the masses, after Paul/Eliezer's direct approaches had failed. Oh and our Luke was an Evangelical Christian before he joined Eliezer. Totally a coincidence and not a wink to the audience.)
One *might* suppose that our author simply took the New Testament stories and rewrote them in the framework of AI. Like faith in the gospel stories, the [rationality that is preached by the Sequences][LW impossible] isn't actually *demonstrated*. The gospels aren't instruction manuals or history books. They are *propaganda for new missionaries*. And similarly, Less Wrong's rationality [doesn't actually do anything][LW not great]. The Sequences are themselves propaganda - a mission charge, a doctrinal creed maybe - but clearly, they are fiction. (Some are [outright attempts][LW emergence] to [silence a heretical faction][LW group selection].)
Another topic. Buddhism. Our poor protagonist - muflax, whose real name, might I add, literally means "[crown of thorns][Crown of Thorns]" - struggles for years with difficult koans and meditation practices, only to find a [New Teaching][MCTB] that brings him to the level of an anagami - a Never-Returner, one of the highest ranks as far as enlightenment goes - within a *year*. Sure you're not selling some cult propaganda? But then, almost perfecting this teaching, muflax realizes that *an even better* teaching exists - tantra. And what's the source of this tantra? A [vampire novel][Buddhism for Vampires], written by a [clearly fictitious][Chapman Fiction] author. Who was once an AI researcher. Yeah, right.
Might I add that this "muflax" is not a singular person? The text has gone through some serious editing at the least. Look at these quotes, all allegedly by the same person:
> > Just to make this maximally concrete: if you were given a magic button that, if pressed, caused the world to end five minutes after your death, would you press the button?
> [...] yes, I would be mostly indifferent about the button [...] and would press it [for money]. ([source][LW button])
And also:
> > Persons have a right not to be killed; persons who have waived or forfeited that right, and non-persons, are still entities which should not be destroyed absent adequate reason. Preferences come in with the "waived" bit, and the "adequate reason" bit, but even if nobody had any preferences (...somehow...) then it would still be wrong to kill people who retain their right not to be killed (this being the default, assuming the lack of preferences doesn't paradoxically motivate anyone to waive their rights), and still be wrong to kill waived-rights or forfeited-rights persons, or non-persons, without adequate reason. I'm prepared to summarize that as "Killing: generally wrong".
> Fascinating. This view is utterly incomprehensible to me. I mean, I understand what you are saying, but I just can't understand *how* or *why* you would believe such a thing.
>
> The idea of "rights" as things that societies enact makes sense to me, but universal rights? I'd be interested on what basis you believe this. (A link or other reference is fine, too.) ([source][LW deontology incomprehension])
Then later:
> I praise you for having the wisdom of using a long enough deadline. When I first read your comment, it felt like you were exploiting me, as if you were forcing me to share my limited praise resources. But because I had enough time, I got over myself, realized that this is not a zero-sum game, that this is not an attack on my status and that what you are doing is clever and good.
>
> Well done, I praise you for your right action. ([source][LW praise])
And:
> [I strongly suspect][LW values fulfilled] that I don't actually care that my values are fulfilled outside of my experience. I see no reason why anyone would. ([source][LW wireheading request])
But then:
> I always suspected there was something wrong with being happy. [...] I really got this playing Minecraft. In a way it's perfect. It's almost exactly what I thought heaven would be like. (Needs more machinery and no height limit, though.) But when I had built a little house, I realized that there's no point to it. I stared upon the vast landscape, knowing that it would be impossible for me to ever be *satisfied* with it.
>
> There is peace, but it's the peace of a blank screen. It is not victory. (unpublished draft)
This same muflax has also later written works that rely on some form of [deontology][Why I'm Not A Vegetarian], something they found "incomprehensible" just a year earlier. Doesn't it seem more likely that these later works are pseudepigraphical, and that the narrative in this "autobiography" is at best a harmonization of different traditions and possibly different persons?
Maybe it's all just a myth?
*(I've been reading a lot of [Higher Criticism][] lately. Can you tell?)*
@ -6,17 +6,17 @@ tags:
- miracles
- monsters
techne: :done
episteme: :believed
slug: 2012/02/27/a-course-in-miracles-jack-and-the-beanstalk/
---
There are some movies that are amazing and widely recognized as such, like [Moulin Rouge!][] or [秒速5センチメートル][] (5 Centimeters Per Second).
Then there are movies that are unjustly ignored; yet more victims of the intrinsic evil that is our universe. Let's talk about one such movie - [ジャックと豆の木][], aka the 1974 anime version of Jack and the Beanstalk.
<%= image("JackandtheBeanstalk1974.jpg", "Jack poster") %>
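The raw HTML embeds are replaced throughout this port by `<%= image(...) %>` and `<%= youtube(...) %>` calls. Their definitions aren't part of this diff; a minimal sketch of what such ERB-style Ruby helpers might look like, with the output markup and the image path assumed rather than taken from the real site:

```ruby
# Hypothetical helper definitions, inferred from usage in this diff.
def image(file, alt)
  %(<img src="/images/#{file}" alt="#{alt}" class="aligncenter" />)
end

def youtube(url)
  # Pull the video id out of an old-style /v/<id> URL and emit an iframe
  # instead of the old <object> boilerplate.
  id = url[%r{/v/([\w-]+)}, 1]
  %(<iframe width="420" height="315" src="http://www.youtube.com/embed/#{id}" allowfullscreen></iframe>)
end
```
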
Like me, you may have seen it as a child. If not, [BakaBT has a good torrent][Jack Torrent] and it's up on TPB and so on as well. Normally I'd advise you to get the original version, but this time I highly recommend the English dub (or the German one if you prefer, as it's about equally good). It's one of the few very rare movies where the English translation is an actual improvement, at least as far as the songs are concerned.
Now go and watch it. I'll wait.
@ -28,7 +28,7 @@ Before we get to this awesome segment, it's time for a quick first lesson. *Magi
But enough of that, because it's time for the first great song of the movie - the miracle song.
<%= youtube("http://www.youtube.com/v/Y6JGfeOfYCo") %>
The beans open and the beanstalk grows, a huge tower reaching up into the heavens. But that's not the cool part. It's the *lyrics*.
@ -46,7 +46,7 @@ The beans open and the beanstalk grows, a huge tower reaching up into the heaven
This fundamental dilemma - both the skeptic and the gnostic being right at the same time - is exactly the dilemma of my life. All religions are false, the prophets delusional, lying or even fictitious, yet somehow, *some of the stuff actually works*. Not in the way it's advertised, mind you, nor is there any informed consent involved. Jack didn't know what the beans would do or where the stalk would lead. Nor did we drug-using meditators know what the stuff would do to our minds.
But miracles do happen. Weird, unexpected, uncontrollable miracles, but miracles nonetheless. If you never trust the beans, judge only the people who sell them, you will never find out. This is why the tarot deck starts with [the fool][Tarot Fool]. It takes reckless stupidity to ever attempt any magic. No wise man would ever do it.
Jack, fortunately, is not particularly wise.
@ -56,7 +56,7 @@ After a night of climbing, they finally reach the top, as the beanstalk connects
Here's her song.
<%= youtube("http://www.youtube.com/v/-UruF7vu3JM") %>
It's hard for me not to gush about how wonderful this movie is. But this song? Seriously takes the cake. It perfectly captures everything-is-perfect-I'm-so-high-right-now happiness, and even better than that, it shows the fundamentally *delusional* and *creepy* nature of this happiness.
@ -68,7 +68,7 @@ Putting deep-seated distrust of happiness, love and marriage into children's hea
Margaret takes Jack and Crosby to meet Madame Hecuba, Prince Tulip's mother. As you probably have guessed, she's the evil witch. And Prince Tulip? A monstrous, man-eating giant.
<%= image("nlamnqf7jg.jpg", "Tulip and Hecuba") %>
Immediately, Hecuba decides she's gonna eat this boy. Heck, that's what witches *do*, can't blame them for that. So she tries to be as charming and friendly as she can be with a Level 15 Fear Aura following her everywhere, and invites Jack to lunch. Unfortunately for her, her son Tulip comes home early, smells the boy and wants in on the action. Evil not being all that much into sharing, Hecuba tries to hide Jack, but fails. Other mice, who we now recognize as the transformed former court of the castle, help the protagonists flee in the ensuing chaos and they all make it to the castle's treasury.
@ -88,15 +88,15 @@ Yes, seriously. Jack gives them a little number how he's just a farmer's boy and
Another valuable life lesson: heroes are *stupid*. People who charge evil Gryffindor-style are fundamentally insane. The whole kingdom couldn't take on the witch and her son, but a poor boy with no training whatsoever and a bunch of friendly animals are gonna do it? Nonsense!
And if you can't save the world, you can at least be filthy rich and escape poverty forever. And despite the obvious change-of-mind that will bring Jack back to the castle, the wealth *stays with him*. He and his mother will never go hungry again. This is what heroes always forget in their great adventures - the suffering of the common people that get to inhabit their worlds. It's a lot easier to [smile][Hanson smile] when you have a goose that lays golden eggs. Purpose fills no stomach.
So Jack makes it back home and it's big celebratin' time. After a night of dancing and bragging, Jack finally hears his friend Crosby's pleas to go back and save the princess. The two climb the beanstalk again and prepare their assault. They consult the talking harp in the treasury, who (after some enhanced interrogation) tells them the secret behind the curse. If someone truly brave and courageous kisses the princess, the spell will be broken and the princess will be free again. (Note please that it's about courage, *not* love!)
With Jack still wondering if he is courageous enough to try, Hecuba proceeds with her plan to legally annex the kingdom by marrying her son to the only living heir, Margaret, which brings us to the creepiest wedding ceremony in a children's story.
<%= youtube("http://www.youtube.com/v/LMr1nSHDtmk") %>
Take that, unholy vampire marriage scene in [Breaking Dawn][]! This is how it's done! The manipulation clearly shown, the couple revealed as completely delusional (both thinking the other loves them!), the fake nature of [asking someone if they're really happy][Sister golem] plain as day.
I can't add anything to this fantastic scene, so let's just admire it for a bit and move on.
@ -112,11 +112,11 @@ Jack and Margaret use the distraction to meet up with the rest of the gang. Jack
The only thing left to do is kill the giant.
A frontal assault, however, doesn't work, so Jack comes up with a plan. He taunts Tulip until he is consumed by rage again, then climbs down the beanstalk. [Jack is much nimbler][Jack nimble], of course, so he descends at a much faster pace and reaches the ground first.
And after one last hesitation, one last goodbye to a world of magic that will never return, he gets the axe, and with the force of true inevitability, cuts down the beanstalk.
<%= youtube("http://www.youtube.com/v/U35c-uY7GgM") %>
There is no happy ending for monsters, not even ones that love and only wish to be happy. Redemption is a miracle, one that Tulip is not granted. With no stalk to support him, he merely falls to his death. The world is not a fair place.
@ -124,4 +124,4 @@ And with this, our story comes to an end. Jack will eventually forget Margaret a
When the miracle is done, justice will be restored, happiness will be wiped away and all magic will have disappeared. The best we can hope for is to lead evil to its own destruction. Goodness is forever beyond our reach.
But then, the only miracles are in the storybooks and they are lies.
@ -7,23 +7,21 @@ tags:
- pirates
- tantra
techne: :done
episteme: :believed
slug: 2012/01/30/dark-stance-thinking-demonstrated/
---
As I [once noted][Dark Stance]:
> In the Dark Stance, you *don't* embrace hatred because it makes you do good things, or gives you a rush, or so you can see through it and overcome it, nor do you *endure* it. That still assumes that hatred is only instrumental or an unfortunate necessity. Dark Stance embraces hatred *for hatred's sake*. Also, the Dark Stance is not an Evil Trope. The Good and the Bad Guys both don't want to suffer, they merely use different ways to overcome their own suffering. Evil might be willing to cause suffering for others, but it will never cause its *own* suffering. The only fictional example of someone taking the Dark Stance I can think of is Planescape's Sensates.
>
> And the weird thing is, for the few days now that I've been learning this, for the few hours I've been able to hold the Dark Stance, I felt *satisfied*.
*(Damn, the actual text is currently unreachable 'cause I still haven't pushed my website changes. I'll do it this week just so I can finally get it out of the way.)*
After running through a dark forest at 0°C, high (who the fuck runs sober?!), I noticed something. (Besides that I really need a better lamp than my MP3 player's display next time.)
There already is a precedent for Dark Stance thinking. And it has a catchy tune. Listen (starts a minute in):
<%= youtube("http://www.youtube.com/v/YvUbbYX9BMs") %>
In particular, look at these lyrics:
@ -48,4 +46,4 @@ That's exactly what it's about. Embrace the monster that you are. If you are a p
This is the real problem, hidden by hypocrisy and moral progress thinking. The faulty idea is that we are good because we do good things. This way corrupts Honor, corrupts what Ye Olde Existentialists called authenticity. We are good because we are *pure*, unified in what we do. We embrace what we are and do it the *right* way, regardless what it is. A pirate is not evil for being a pirate, as long as they are a *professional* pirate.
*(On the off-chance that I become a religious saint some centuries down the road, I want to force the Muppets into the canon of whatever religion takes me up. This will be my true heritage.)*
@ -8,7 +8,6 @@ tags:
- vipassana
techne: :wip
episteme: :speculation
---
New article: a [Meditation on Hate]().
@ -21,4 +20,4 @@ Unfortunately, I fear I am losing the ability to show *why* I am so fascinated b
All happiness and its related emotional states, at least as I have experienced them, are fundamentally *betrayal*. They are distractions, always distanced from what I can only call [suchness](http://en.wikipedia.org/wiki/Tath%C4%81t%C4%81/Dharmat%C4%81). I don't like the term either, but I lack a better one. All this talk of beauty, of love, mercy and bliss, over so many years, and it all amounted to nothing, but within pain I finally find clarity. Not peace, mind you, nor surrender. The Dark Stance is entirely dissonant. It devours me, is violent, uncontrollable, but always... *there*. I am in a state of constant agitation, yet I find clarity. I do not know if this is a special property of these states, or just testament to how twisted my mind has become, but I value the experience greatly regardless. As the great [Lepht](http://www.youtube.com/watch?v=a-Dv6dDtdcs) has said, it is not self-harm if it does something.
I find this approach deeply ironic because it is essentially the exact opposite of what I was doing back in my vipassana days. Back then, I spent most of my time sitting in the so-called Dark Night jhanas, mentally curled up in a tight ball of anxiety, trying to make progress, *any* progress. I was throwing more and more energy at the problem, hoping I could at least reach equanimity. I was always disappointed when I had temporarily reached peace-of-mind, only to slide back into anxiety. Now I'm doing the reverse. I have come to *despise* equanimity and actively try to *prevent* any transformation. I want just anxiety, just disgust, just hatred to exist and not *go* anywhere. It is almost effortless. However, I am constantly being pulled *towards* transformation, could very easily go into equanimity, but I refuse. This strengthens my intuition that all mental difficulty is imagined, is really just an adversarial mental process trying to scare you away. Unfortunately for this adversary, I don't care anymore. I do not want the progress it protects anymore. Once you choose Hell over Heaven, Satan loses all importance.
@ -3,8 +3,7 @@ title: The Dukkha Core
date: 1970-01-01
tags: []
techne: :wip
episteme: :emotional
---
This is dukkha.
@ -63,4 +62,4 @@ It is brutal, ruthless, unmoving, inevitable.
Only now have I realized that this is what I was always looking for. It is the Unchanging. It demands nothing of me, requires no service, no practice. It is eternal. It doesn't need to be *attained* - it is always there. In all the possible worlds can I find it.
It doesn't change. This is dukkha. I accept it.
@ -4,7 +4,6 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---
So I recently read about someone's Ayahuasca experience. Because the person is reasonably familiar with meditation techniques and not a [17-year-old idiot](http://blog.muflax.com/2012/01/03/how-my-brain-broke/), I thought it might be of interest.
@ -46,4 +45,4 @@ Guess who showed up.
So change the music to [I Can't Decide](https://7chan.org/fl/src/Cant_Decide.swf) by the Scissor Sisters, a song about a crisis of faith experienced by an aspiring Friendly AI trying to uplift a human who is attempting a hostile takeover.
[I'm the man who's gonna burn your house down!](http://www.youtube.com/watch?v=7mt8I6cvFsM)
@ -6,7 +6,7 @@ tags:
- crap
- god
techne: :done
episteme: :emotional
slug: 2012/01/03/how-my-brain-broke/
---
@ -22,13 +22,13 @@ There are two things I wanna talk about.
# Ayahuasca
Some basic background. [Ayahuasca][] is a pretty strong hallucinogen. I prefer(red) the term [entheogen][Entheogen] because Ayahuasca was the only drug I ever took that I felt had an independent personality. You aren't taking Ayahuasca - you are meeting Ayahuasca and it will do whatever the fuck it wants with you. Which is also why I like the translation "vine with a soul", even though it's probably bogus.
Ayahuasca is a bit tricky to prepare. You're basically interested in DMT, but you can't ingest it orally 'cause your stomach destroys it. You need an MAO inhibitor to stop it from doing so. So you are really taking two drugs. There are clever ways to get both MAO-I and DMT without many side-effects. But if you're a 17-year-old teenager with no previous drug experience, then you don't care and do things the stupid way. (You read that right. Ayahuasca was my first drug, even before alcohol. Never did things half-assedly.)
So I boiled a simple, way-too-acidic preparation in my parents' kitchen without them noticing, took my MAO-I, waited half an hour, filled my Ayahuasca into a pot and took it to my room, ready to drink it all. I put on [múm][], sat on my bed and started drinking half a liter of psychedelics.
Ayahuasca looks like purple wine with some liquid metal on top. Not too healthy, but you can always close your eyes. It smells kind of like the jungle, like some fresh dirt. Not too unpleasant, actually, if you never took it and don't associate the smell with anything yet. But there's one thing you never forget.
The fucking taste.
@ -44,13 +44,13 @@ Not that it would've helped me. The physical vomiting isn't so bad. It's really
Then Ayahuasca reminded me that I had paid for the whole night and that it had no intention of holding anything back. Suddenly there were colors everywhere, everything became blurry and space itself accelerated. Waves were drifting through my room, but I could barely pay any attention because the swirl of colors got faster and faster. It kinda looked like this:
<a href="http://blog.muflax.com/wp-content/uploads/2012/01/spaceballs.jpg"><img class="aligncenter size-full wp-image-633" title="spaceballs" src="http://blog.muflax.com/wp-content/uploads/2012/01/spaceballs.jpg" alt="" width="610" height="315" /></a>
<%= image("spaceballs.jpg", "spaceballs") %>
I realized I couldn't keep up, couldn't look anywhere without starting to vomit again. My thoughts were blending with the wallpaper and the room transformed into various scenes, the music itself was throwing waves, tracks merged, everything became way too intense for me. I closed my eyes and surrendered, because I was going straight to hell.
Just me and my mind. Doable. It was even faster, the visions even more intense, but more focused, less complicated. Just intricate geometric patterns and a long, long tunnel I fell through.
<object width="560" height="315" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/gagR2_Yi8wE?version=3&amp;hl=en_US" /><param name="allowfullscreen" value="true" /><embed width="560" height="315" type="application/x-shockwave-flash" src="http://www.youtube.com/v/gagR2_Yi8wE?version=3&amp;hl=en_US" allowFullScreen="true" allowscriptaccess="always" allowfullscreen="true" /></object>
<%= youtube("http://www.youtube.com/v/gagR2_Yi8wE") %>
(That is the only accurate depiction of Ayahuasca ever, btw. The director is a big fan and you can see it.)
@ -96,7 +96,7 @@ I read the Bible. Parts of it, anyway. I started talking to God. (He never answe
God was a mystery, but mystery was good. It was something to retreat into, something that you could probe and that didn't go away. It *stayed* mysterious. It was unchanging. It was ever-lasting. It was full of love.
<object width="560" height="315" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/yzqTFNfeDnE?version=3&amp;hl=en_US" /><param name="allowfullscreen" value="true" /><embed width="560" height="315" type="application/x-shockwave-flash" src="http://www.youtube.com/v/yzqTFNfeDnE?version=3&amp;hl=en_US" allowFullScreen="true" allowscriptaccess="always" allowfullscreen="true" /></object>
<%= youtube("http://www.youtube.com/v/yzqTFNfeDnE") %>
There actually isn't much to say about God. I didn't have much of a belief system. God has no personality. It's really just an extremely powerful emotion. A sense of true peace and belonging. Something only an eternal divine being could grant.
@ -116,4 +116,4 @@ When I was with God, I was immortal. Protected. *Safe*. What am I now? If I fuck
What's joy compared to Eternal Bliss? What's human love compared to the Father? You're lucky if you can hold on to a lover for a decade. God lasts forever. God never doubts. What insight is there in art? The best you can hope for is getting laid. What could science ever do for me? The more I studied, the less faith I had. The more I saw religion as pure fiction, as political manipulation, saw each redactor changing the sayings of the saints to fit whatever doctrine they needed. Jesus is a highly-optimized human-engineered predator meme.
I hope that communicates the bleak darkness that not-having-God-anymore left in me. I have no idea how to deal with it. All my psychological oddities derive from it. That's why I'm so pessimistic. When God left me, he took the circuitry for joy with him. He broke my brain.

View File

@ -12,9 +12,9 @@ slug: 2012/02/09/algorithmic-causality-and-the-new-testament/
...is what I would name an article I'm seriously considering writing. This is not that article. This is just the idea.
<img alt="" src="http://www.smbc-comics.com/comics/20100512after.gif" class="aligncenter" width="362" height="360" />
<%= image("20100512after.gif", "title") %>
What's one of the biggest controversies in New Testament studies? No, not the Jesus myth, we all know he was a [12th century Byzantine emperor](http://en.wikipedia.org/wiki/New_Chronology_%28Fomenko%29#Fomenko.27s_claims). No, more important than that, more fundamental.
What's one of the biggest controversies in New Testament studies? No, not the Jesus myth, we all know he was a [12th century Byzantine emperor][Fomenko claims]. No, more important than that, more fundamental.
When, and in what order, were the texts written? I'm going to ignore the *when* and instead focus on the *in what order*.
@ -26,15 +26,15 @@ You'd think that with such an important question, you'd have good answers by now
Anyway, back to text ordering. I had an interesting talk with a statistical learning researcher yesterday and he brought up a really cool idea.
Let's say you have two pieces of data, A and B, and you're trying to figure out if A *causes* B. [Traditionally](http://en.wikipedia.org/wiki/Judea_Pearl), you do this through statistics. You sample and collect some observations, then check if you see conditional probabilities. Basically, if A and B are independent variables, there can't be a causation, but if you can predict B, given A, but not the other way around, then A causes B. (In your face, Popper!)
Let's say you have two pieces of data, A and B, and you're trying to figure out if A *causes* B. [Traditionally][Judea Pearl], you do this through statistics. You sample and collect some observations, then check if you see conditional probabilities. Basically, if A and B are independent variables, there can't be a causation, but if you can predict B, given A, but not the other way around, then A causes B. (In your face, Popper!)
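To see the statistical version in action, here's a minimal sketch - binary toy variables, all numbers invented, nothing Pearl-grade about it:

```python
# Toy dependence check: does conditioning on A shift B?
# A and B are synthetic; B noisily copies A.
import random

random.seed(0)
samples = []
for _ in range(10_000):
    a = random.random() < 0.5
    b = a if random.random() < 0.9 else not a
    samples.append((a, b))

p_b = sum(b for _, b in samples) / len(samples)
p_b_given_a = sum(b for a, b in samples if a) / sum(1 for a, _ in samples if a)

# Roughly 0.5 vs 0.9: A and B are clearly dependent. Independence would
# have ruled out causation; dependence alone doesn't tell you which way
# the arrow points - hence the rest of this post.
print(p_b, p_b_given_a)
```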
There's one problem with this - you need a certain amount of samples. It doesn't work with N=1. If you only ever saw A and B once, statistically, you'd be screwed. [But maybe there's another way.](http://arxiv.org/abs/0804.3678)
There's one problem with this - you need a certain amount of samples. It doesn't work with N=1. If you only ever saw A and B once, statistically, you'd be screwed. [But maybe there's another way.][Causal Inference]
Let's say your data is actually a sequence of digits, as produced by two volunteers. You put each one of them in an isolated room and then tell them to write down 1000 digits. Afterwards you compare the texts and notice something - *they are almost identical*. What happened?
Well, one possibility is that one of them copied the other. But you isolated them, this can't have happened. What else? If you thought, "they used the same method to come up with the sequence", then you win. For example, they might both be writing down the prime numbers, but each one made a few minor mistakes. But how does this help us discover causality?
Remember [Kolmogorov complexity](http://blog.muflax.com/2012/01/14/si-kolmogorov-complexity/). K(s) of any sequence s is a measure of how well you can compress s. In other words, it tells you how hard it is to find an algorithm to generate s. The lower K(s), the easier the task. So going back to our two sequences A and B, what's their complexity? Well, K(A) and K(B) will be almost identical. After all, it's just K(prime numbers) + K(a few mistakes). But more importantly, what's the complexity of K(A, B), i.e. of a program that outputs both A and B? In our case, it's almost the same - we just have to remember the additional mistakes. K(prime numbers) can be reused.
Remember [Kolmogorov complexity][Kolmogorov Complexity]. K(s) of any sequence s is a measure of how well you can compress s. In other words, it tells you how hard it is to find an algorithm to generate s. The lower K(s), the easier the task. So going back to our two sequences A and B, what's their complexity? Well, K(A) and K(B) will be almost identical. After all, it's just K(prime numbers) + K(a few mistakes). But more importantly, what's the complexity of K(A, B), i.e. of a program that outputs both A and B? In our case, it's almost the same - we just have to remember the additional mistakes. K(prime numbers) can be reused.
So we see that in our example, K(A) + K(B) is significantly larger than K(A,B) because there is so much overlap. What if they had used different methods, say if B was writing down π instead? Then K(A) + K(B) would be basically identical to K(A,B). You couldn't reuse anything.
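You can watch this effect with an ordinary compressor standing in for K - strictly it only gives an upper bound (more on that below), but the toy example still works; all sequences here are invented:

```python
# C(s) approximates K(s) from above via zlib. Two noisy copies of the
# primes compress together almost as well as one copy alone; primes
# next to squares stay roughly additive.
import zlib

def C(s: str) -> int:
    return len(zlib.compress(s.encode(), 9))

def primes_up_to(n: int) -> list[int]:
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, p in enumerate(sieve) if p]

A = " ".join(map(str, primes_up_to(5000)))
B = A.replace("41 ", "44 ").replace("83 ", "82 ")  # same method, a few "mistakes"
D = " ".join(str(i * i) for i in range(1, 700))    # a different method

print(C(A) + C(B), "vs", C(A + B))  # joint version much shorter: shared structure
print(C(A) + C(D), "vs", C(A + D))  # roughly additive: nothing to reuse
```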
@ -50,14 +50,15 @@ So now you can order A, B and C. You know the obvious causal connection A-B, so
One problem: you don't have a *direction*. This is a general causal problem. You don't know if A caused C by adding errors or C caused A by removing them. You know the *topology*, but have no arrows. Minor bugger. There may be a solution to that problem. You need to introduce a kind of entropy, but that only complicates this nice and simple approach, so we won't do that here.
The result is already quite nice. Just get out your little [Kolmogorov black box](http://blog.muflax.com/2012/01/15/si-incomputability/) and compute various K(x) and K(y|x) and you know who plagiarized who. ...oh, your Kolmogorov box is in repair? You ran out of hypercomputronium and can't compute K(x)?
The result is already quite nice. Just get out your little [Kolmogorov black box][Incomputability] and compute various K(x) and K(y|x) and you know who plagiarized who. ...oh, your Kolmogorov box is in repair? You ran out of hypercomputronium and can't compute K(x)?
[Well have I got news for you!](http://arxiv.org/abs/1002.4020) Recall that Kolmogorov complexity is fundamentally compression. You can think of picking a compression algorithm to compare sequences like deciding on a Turing Machine, then finding shortest programs. Also, whatever compression you achieve is an upper bound of the real K(s), so they function as good approximations. If only there were runnable compression algorithms...
[Well have I got news for you!][Causal Markov] Recall that Kolmogorov complexity is fundamentally compression. You can think of picking a compression algorithm to compare sequences like deciding on a Turing Machine, then finding shortest programs. Also, whatever compression you achieve is an upper bound of the real K(s), so they function as good approximations. If only there were runnable compression algorithms...
There are [shit-tons of compression algorithms](http://en.wikipedia.org/wiki/Lossless_data_compression)! Just pick one and compress away. Have fun with your causal graph! Only one little problem - you'll find out that your algorithm is somewhat biased. (The irrational bastard!) You can think of it as a *prior* over your programs-to-be-compressed. For example, if you use run-length encoding (i.e. you save "77777" as "5x7"), then you assume that simple repetition is likely. The more features you build into your algorithm, the more slanted your prior becomes, but typically the better it compresses stuff. For our task of ordering historical texts, we want an algorithm that identifies textual features so it can exploit as much structure as possible (and ideally, in a similar way as humans), but doesn't favor any particular text. (Sorry, I don't yet know what the best choice is. I hear [LZ77](http://en.wikipedia.org/wiki/LZ77_and_LZ78_%28algorithms%29) is nice, but there's still science to do.)
There are [shit-tons of compression algorithms][Lossless Data Compression]! Just pick one and compress away. Have fun with your causal graph! Only one little problem - you'll find out that your algorithm is somewhat biased. (The irrational bastard!) You can think of it as a *prior* over your programs-to-be-compressed. For example, if you use run-length encoding (i.e. you save "77777" as "5x7"), then you assume that simple repetition is likely. The more features you build into your algorithm, the more slanted your prior becomes, but typically the better it compresses stuff. For our task of ordering historical texts, we want an algorithm that identifies textual features so it can exploit as much structure as possible (and ideally, in a similar way as humans), but doesn't favor any particular text. (Sorry, I don't yet know what the best choice is. I hear [LZ77][] is nice, but there's still science to do.)
So what do we do now? Gather all texts in their original form and compress the hell out of them. Of course, test the procedure with corpuses that have a known ordering first. Bam, definite answers to problems like the [Markan priority](http://en.wikipedia.org/wiki/Markan_priority). History is uncertain no more.
So what do we do now? Gather all texts in their original form and compress the hell out of them. Of course, test the procedure with corpuses that have a known ordering first. Bam, definite answers to problems like the [Markan priority][]. History is uncertain no more.
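If I actually ran this, it would probably start as something like the sketch below - the normalized compression distance from the Cilibrasi/Vitányi clustering-by-compression work, with zlib as a (biased, see above) stand-in compressor and hypothetical file names:

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y));
# low values mean heavy shared structure. zlib is just a placeholder,
# and the file names are made up.
import zlib
from itertools import combinations

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

texts = {name: open(name, "rb").read()
         for name in ("mark.txt", "matthew.txt", "luke.txt")}

for a, b in combinations(texts, 2):
    print(a, b, round(ncd(texts[a], texts[b]), 3))
# The resulting similarity matrix gives the topology of the causal
# graph - though, as noted above, not the direction of the arrows.
```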
So yes, I'm yet another engineer who looked at some field within the humanities and thought, that's all rubbish, I bet I can solve this shit right now.
<img alt="" src="http://scienceblogs.com/pharyngula/upload/2010/08/an_interesting_thread_tangent/philo_engineers.jpeg" class="aligncenter" width="400" height="263" />
<%= image("philo_engineers.jpeg", "SMBC engineer bann") %>

View File

@ -6,13 +6,13 @@ tags:
- catholicism
- vipassana
techne: :done
episteme: :speculation
episteme: :discredited
slug: 2012/03/14/catholics-right-again-news-at-11/
---
So I've [said](http://blog.muflax.com/2012/01/04/why-you-dont-want-vipassana/) [repeatedly](http://blog.muflax.com/2012/02/22/the-end-of-rationality/) now that I have serious problems with vipassana and the whole Theravada soup it emerged from. It's not just a technical problem, but a deep rejection of the assumptions, goals and interpretations of that framework, at least in its current form. I still like them enough that I'm not interested in taking my stuff and going home. I merely believe that vipassana, as it exists today in its numerous incarnations, is in serious need of repair, but still worthwhile. But before we start with the fixing, let's have a look at what's *broken*.
So I've [said][Why You Don't Want Vipassana] [repeatedly][The End of Rationality] now that I have serious problems with vipassana and the whole Theravada soup it emerged from. It's not just a technical problem, but a deep rejection of the assumptions, goals and interpretations of that framework, at least in its current form. I still like them enough that I'm not interested in taking my stuff and going home. I merely believe that vipassana, as it exists today in its numerous incarnations, is in serious need of repair, but still worthwhile. But before we start with the fixing, let's have a look at what's *broken*.
Interestingly enough, I found that the Catholic Church[1] had already written my criticism for me, in their [Letter to the Bishops of the Catholic Church on some aspects of Christian Meditation](http://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_19891015_meditazione-cristiana_en.html), and I only need to comment on some minor aspects of it and maybe translate some it back into Buddhist lingo. In contrast to the glorious [Visuddhimagga](http://en.wikipedia.org/wiki/Visuddhimagga) (and Theravada scripture in general), the Catholic maps are much less detailed and are plagued by important holes and mistakes due to doctrinal commitments, but what they lack in precision, they make up in interpretation. The Catholic [vehicle](http://en.wikipedia.org/wiki/Yana_%28Buddhism%29) may have inferior engineering compared to the [Causal vehicle](http://www.rigpawiki.org/index.php?title=Sutrayana), but it has one major advantage - it's driving in the right direction.
Interestingly enough, I found that the Catholic Church[1] had already written my criticism for me, in their [Letter to the Bishops of the Catholic Church on some aspects of Christian Meditation][Catholic Meditation], and I only need to comment on some minor aspects of it and maybe translate some of it back into Buddhist lingo. In contrast to the glorious [Visuddhimagga][] (and Theravada scripture in general), the Catholic maps are much less detailed and are plagued by important holes and mistakes due to doctrinal commitments, but what they lack in precision, they make up in interpretation. The Catholic [vehicle][Yana] may have inferior engineering compared to the [Causal vehicle][Sutrayana], but it has one major advantage - it's driving in the right direction.
The Church says (emphasis mine):
@ -62,10 +62,10 @@ The main problem I have is the lack of reason for a practice. So I know how Maha
>
> In these apparently negative moments, it becomes clear what the person who is praying really seeks: is he indeed looking for God who, in his infinite freedom, always surpasses him; or is he only seeking himself, without managing to go beyond his own "experiences", whether they be positive "experiences" of union with God or negative "experiences" of mystical "emptiness."
Unconditional acceptance, despite the full understanding of one's own sinful nature. And I thought was the only person to [get this](http://muflax.com/morality/stances/).
Unconditional acceptance, despite the full understanding of one's own sinful nature. And I thought I was the only person to [get this][Dark Stance].
This provides a different solution to the Dark Night nanas. Don't overcome them - embrace them. They teach you what you're really looking for - actual emptiness. Don't work around them.
I doubt anyone involved in the writing of this document is an actual arhat. And yet they get it right. "When in doubt, do what the Catholic Church says" seems like a really good heuristic lately. They have accumulated an amazing amount of good insights and stable social practices over the centuries. If you don't know what to think about a topic, going with the Church doctrine (and ignoring it if the Church hasn't said anything about it) seems to me like an almost universally good idea, and I say that as a filthy unbaptized heathen.
However, I don't think this wisdom is particularly connected to Christianity or any unique theological idea in Catholicism, but rather the *long* history of being the state religion of various large empires. Other "empire religions" like Confucianism or Islam do a great job as well, but except for maybe Confucianism and (some) Hinduism, none have the vast experience and large supply of dedicated intellectuals as the Catholic Church. Also, institutional wisdom almost always outperforms individual insight, so having hundreds of specialized priests think about a problem and trying solutions for a couple of centuries gives you some serious experience. Don't underestimate it.

View File

@ -3,8 +3,7 @@ title: Evangelium Teutonicum
date: 2012-03-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=855
episteme: :fiction
---
Thus have I heard.
@ -57,4 +56,4 @@ When I read the name, it took on a life of its own. The letters spoke themselves
- God hoping for reconciliation
- God being drained out of the world

View File

@ -1,47 +0,0 @@
---
title: Killing Jesus (pt. 1)
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=523
---
# Overview
<blockquote>
<p style="text-align: left;"><strong>When you meet the Buddha on the road, kill him.</strong>[1]</p>
</blockquote>
<p style="text-align: left;">Many Buddhists know this saying. Most don't take it seriously. It's like a "get out of status hell" card. Whenever someone questions your idea of the Holy Buddha, you just say, "then kill him - it's about the practice, not the person" and you look wise and humble, but you can keep on worshiping the dude, his ideas and what he stands for. Don't do that. (You don't care about Jesus? Good. Move on. You don't have the disease I'm curing here.)</p>
I still have some Sacred Ideas in my head about Holy Men. Yes, the religions that were built around them are nonsense, but the original dudes, they had something interesting to say. This holds me back. It makes me sympathetic to <a href="http://blog.muflax.com/2011/09/20/a-rationalists-lament/">shitty ideas</a>. It's just crazy. So let's do a complete deconstruction of the worst example, Jesus. In other words, I'm now destroying the <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/JesusWasWayCool">Jesus Was Way Cool</a> trope in all its glory.
It's time to kill Jesus. For real this time.
# Which Jesus?
Turns out, there isn't just one Jesus. There are quite a few. Some people try to harmonize them together, but the result of this multitude is really just that whenever one version gets attacked, a believer can just pick another and claim that <em>this</em> is "the real Jesus". You don't like the Son Of God? Well, pick The Social Reformer. That's a strength of the Jesus meme, but it really screws up your rationality. So to defeat it, you really need to get rid of <em>all</em> the versions. And the important thing is, you can't play them against each other. You must defeat them on their own terms.
Let's list them all.
(I'm going to tackle one version per post, to make the writing a bit easier on my side. Once I'm done, I'll put it all into one article and move it to the main site. I may skip a version if I think someone else already did a good job destroying it.)
Jesus the Ascetic
Jesus the Mystic
Jesus the Messiah
Jesus the Social Reformer
Jesus the Revolutionary
Jesus the Son Of God
Jesus the Redeemer
Jesus the Miracle Worker
# Part 1 - Overview
\[1\]: <em>Side-note: I think the idea that the mythological Jesus was partially modeled around the Buddha is quite credible. So at least for some aspects of Jesus, it's plausible that I'm actually killing the Buddha by proxy. But regardless of the historical causality, all the aspects of the Buddha I care about have become parts of some version of Jesus, so it won't matter even if they are independent myths. The same arguments apply to both.</em>

View File

@ -4,14 +4,13 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=917
---
For weeks now I've wanted to quote a certain song. But I can't. It's in German. And I can't translate it. Not in a way that does it any justice, at least.
The song is Die Interimsliebenden by Einstürzende Neubauten. Watch it:
<iframe src="http://player.vimeo.com/video/36592271?portrait=0&amp;color=000000" frameborder="0" width="400" height="320"></iframe>
<%= vimeo("http://player.vimeo.com/video/36592271") %>
(BTW: I love the video. It's so amazingly meta-pretentious.)
@ -31,4 +30,4 @@ There will be many such interactions in any human language, but they will be at
But these association themselves might add further layers by adding another meta level, e.g. by referencing the spelling of the word. The more complex they become, the more impressive - and rarer - they will be.
And then you encounter Heidegger or James Joyce.

View File

@ -4,7 +4,6 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=934
---
> Ich stampfe durch den Dreck bedeutender Metaphern, (roughly: "I stomp through the muck of meaningful metaphors")
@ -77,4 +76,4 @@ A simple criterion I've started to use is *locality*.
Another is the *rejection of moral luck*.
(Incidentally, the song Die Interimsliebenden is something I'd love to talk about, but just can't 'cause it's not in English and I utterly fail at producing even a barely adequate translation. I have a draft about this futility, and it might be related to inter-subjective value comparisons, but alas...)

View File

@ -4,6 +4,5 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=953
---

View File

@ -13,23 +13,23 @@ slug: 2012/01/30/morality-for-the-damned-first-steps/
*This is maybe the most important question I'm currently trying to solve. I wish I could write (or better, read) a fully fleshed-out sequence dissolving it, but I don't even know if it's solvable at all, so I'm stuck with a lot of despair and confusion. However, here at muflax, inc. we occasionally attempt the impossible, so let's accept the madness and try to at least delineate what the problem even is.*
The hardware you run on [is evil](http://blog.muflax.com/2012/01/28/the-asymmetry-an-evolutionary-explanation/). You have no built-in privileged knowledge of morality. God is absent. The world is already getting [paperclipped](http://wiki.lesswrong.com/wiki/Paperclip_maximizer) by beings with no concerns for rights, sovereignty or the sacred.
The hardware you run on [is evil][Asymmetry Evolutionary]. You have no built-in privileged knowledge of morality. God is absent. The world is already getting [paperclipped][Paperclipper] by beings with no concerns for rights, sovereignty or the sacred.
The problem is thus: you are in Hell. How can you still do the right thing?
You might resign yourself to acceptance. You might realize the elegance of Empty Set Morality - if nothing exists, no-one is harmed, no-one is coerced, nothing is desecrated. Thus, the Empty Set is moral, maybe the only moral state. You will not bring about any immorality yourself - will birth no-one, rule not, transgress nothing. Yet, others will. What are you going to do about them? How do you stop [over 200,000 sins a day](http://en.wikipedia.org/wiki/World_population)?
You might resign yourself to acceptance. You might realize the elegance of Empty Set Morality - if nothing exists, no-one is harmed, no-one is coerced, nothing is desecrated. Thus, the Empty Set is moral, maybe the only moral state. You will not bring about any immorality yourself - will birth no-one, rule not, transgress nothing. Yet, others will. What are you going to do about them? How do you stop [over 200,000 sins a day][World Population]?
Even though you carry no responsibility for the sins of others, your hatred of sin compels you anyway. You might consider pulling a [Ted Kaczynski](http://en.wikipedia.org/wiki/Ted_Kaczynski). The world is evil, and you will feel a lot of disgust for it. [This is good.](http://meaningness.wordpress.com/2011/07/22/disgust-horror-western-buddhism/)
Even though you carry no responsibility for the sins of others, your hatred of sin compels you anyway. You might consider pulling a [Ted Kaczynski][]. The world is evil, and you will feel a lot of disgust for it. [This is good.][Chapman Disgust]
But changing the world is really hard. You are not just facing some minor [existential risk](http://en.wikipedia.org/wiki/Risks_to_civilization,_humans_and_planet_Earth). You are fighting against Azathoth itself and the billions of intelligent brains at its disposal. You don't need a bunch of pipe bombs. You need a [special kind of savior](http://en.wikipedia.org/wiki/Lelouch_Lamperouge).
But changing the world is really hard. You are not just facing some minor [existential risk][Existential Risks]. You are fighting against Azathoth itself and the billions of intelligent brains at its disposal. You don't need a bunch of pipe bombs. You need a [special kind of savior][Lelouch].
You can barely contain your despair, yet you desire to bring the world out of existence. Other [saints](http://en.wikipedia.org/wiki/Richard_Stallman) have failed on mere subsets of this problem:
You can barely contain your despair, yet you desire to bring the world out of existence. Other [saints][RMS] have failed on mere subsets of this problem:
> I'm the last survivor of a dead culture. And I don't really belong in the world anymore. And in some ways I feel I ought to be dead. [...] I have certainly wished I had killed myself when I was born. [...] In terms of effect on the world, it's very good that I've lived. And so I guess, if I could go back in time and prevent my birth, I wouldn't do it. But I sure wish I hadn't had so much pain.
And yet, the problem grows worse. [One prophet](http://de.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder) still hoped that the universe is an act of suicide, a process of God becoming non-existent. And in a way, Empty Set Morality hopes for the same thing, hopes for a meaning in annihilation. Can such a thing even be done?
And yet, the problem grows worse. [One prophet][Mainländer] still hoped that the universe is an act of suicide, a process of God becoming non-existent. And in a way, Empty Set Morality hopes for the same thing, hopes for a meaning in annihilation. Can such a thing even be done?
Says [the Dead One](http://lesswrong.com/lw/1t0/shock_level_5_big_worlds_and_modal_realism/):
Says [the Dead One][LW SL5]:
> But if you combine a functionalist view of mind with big worlds cosmology, then reality becomes the quotient of the set of all possible computations, where all sub-computations that instantiate you are identified. Imagine that you have an infinite piece of paper representing the multiverse, and you draw a dot on it wherever there is a computational process that is the same as the one going on in your brain right now. Now fold the paper up so that all the dots are touching each other, and glue them at that point into one dot. That is your world.
>
@ -37,14 +37,14 @@ Says [the Dead One](http://lesswrong.com/lw/1t0/shock_level_5_big_worlds_and_mod
>
> [It] is a good candidate for Dan Dennett's universal acid: an idea so corrosive that if we let it into our minds, everything we care about will be dissolved. You can't change anything in the multiverse - every decision or consequence that you don't make will be made infinitely many times elsewhere by near-identical copies of you. Every victory will be produced, as will every possible defeat.
In a world without consequences, without change, harm will never end. You might be - eternally, acausally - moral, but everything else is in sin never-ending. Non-existence is an illusion of causal disconnection, a mere anthropic illusion. Embrace the [B-Theory](http://en.wikipedia.org/wiki/B-theory_of_time) and never cease. It [has been prophesied](http://en.wikipedia.org/wiki/Eternal_return#Friedrich_Nietzsche), yet the hope that we might affirm it has failed us. We now correctly face its horror.
In a world without consequences, without change, harm will never end. You might be - eternally, acausally - moral, but everything else is in sin never-ending. Non-existence is an illusion of causal disconnection, a mere anthropic illusion. Embrace the [B-Theory][] and never cease. It [has been prophesied][Eternal Return], yet the hope that we might affirm it has failed us. We now correctly face its horror.
A denial of infinity's evil is hard to do. If you deny St. Occam and his Universal Prior, how can you explain their effectiveness, can explain this world, explain the sheer feat of explanation itself? Yet there is an element of self-refutation in it. Solomonoff-kami, despite being infinite and uncomputable, will only ever believe finite, computable theories itself. So the very models that lead us to the Big World Crisis will never bring themselves to believe it, nor are constructions of the self within them in any way obvious. A bit of Discordian distrust might be in order.
Face only Azathoth for now, not The Generalized Blind Idiot God. Face only this: you are in Hell. The [Traceless One](http://en.wikipedia.org/wiki/Tath%C4%81gata) has erred. All is suffering. It can not be overcome.
Face only Azathoth for now, not The Generalized Blind Idiot God. Face only this: you are in Hell. The [Traceless One][Tathagata] has erred. All is suffering. It can not be overcome.
Through the mere act of reflection, you bring the [Elder Axioms](http://en.wikipedia.org/wiki/Laws_of_Form) into the world, and with them, evil.
Through the mere act of reflection, you bring the [Elder Axioms][Laws of Form] into the world, and with them, evil.
What, then, are you to do?
Until the answers become clear, meditate on the corpse that is this world, hoping to find emptiness within it somehow.

View File

@ -15,9 +15,9 @@ Says Wiki-sama:
Another way to express the idea of locality is to think in terms of a cellular automaton or Turing machine. Locality simply means that the machine only has to check the values of a limited set of neighbor cells (8 for the Game of Life, 0 for a standard TM) to figure out the next value of the current cell for any given step.
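For what it's worth, the locality is visible directly in code - a throwaway Game of Life step on a bounded grid:

```python
# One Game of Life step: each cell's next value is a function of at
# most 8 neighbors plus the cell itself - nothing global is consulted.
def step(grid: list[list[int]]) -> list[list[int]]:
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r: int, c: int) -> int:
        return sum(grid[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
    return new

blinker = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```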
The fact that some interpretations of quantum physics (Many Worlds most notably) are more local than others (Copenhagen) is commonly used as a major argument in their favor. I've [started collecting](http://blog.muflax.com/2012/01/22/unifying-morality/) features of moral theories and noticed that locality also applies to them, but I've never seen anyone make the argument, so here it goes.
The fact that some interpretations of quantum physics (Many Worlds most notably) are more local than others (Copenhagen) is commonly used as a major argument in their favor. I've [started collecting][Unifying Morality] features of moral theories and noticed that locality also applies to them, but I've never seen anyone make the argument, so here it goes.
Moral theories must make prescriptions. If a moral theory doesn't tell you what to do, it's useless (tautologically so, really). So if after learning Theory X you still don't know what you should do to act according to Theory X, then it's to be discarded. Theory X must be wrong. (And don't try to embrace [moral luck](http://plato.stanford.edu/entries/moral-luck/). That way lies madness.)
Moral theories must make prescriptions. If a moral theory doesn't tell you what to do, it's useless (tautologically so, really). So if after learning Theory X you still don't know what you should do to act according to Theory X, then it's to be discarded. Theory X must be wrong. (And don't try to embrace [moral luck][Moral Luck]. That way lies madness.)
Accepting this requirement, we can draw some conclusions.
@ -32,6 +32,6 @@ You have basically only two options:
By the principle of locality, AU is either equivalent to positive MU (maximize benefit) or negative MU (minimize harm).
Here's another conclusion: preference utilitarianism (or it's 2.0 version, [desirism](http://omnisaffirmatioestnegatio.wordpress.com/2010/04/30/desirism-a-quick-dirty-sketch/)) is at least incomplete. It would require that you know the preferences of all beings so as to find a consensus. Again, this can't be done. It's a non-local action. It is possible to analyze some preferences as to how likely they are to conflict with other preferences, but not for all of them. If I want to be the only being in existence, then I know my preference is problematic. If I want no-one to eat pickle-flavored ice-cream, I need to know if anyone actually wants to do so. If not, my preference is just fine. But knowing this is again a non-local action, so I can't act morally.
Here's another conclusion: preference utilitarianism (or its 2.0 version, [desirism][Desirism]) is at least incomplete. It would require that you know the preferences of all beings so as to find a consensus. Again, this can't be done. It's a non-local action. It is possible to analyze some preferences as to how likely they are to conflict with other preferences, but not for all of them. If I want to be the only being in existence, then I know my preference is problematic. If I want no-one to eat pickle-flavored ice-cream, I need to know if anyone actually wants to do so. If not, my preference is just fine. But knowing this is again a non-local action, so I can't act morally.
So unless you are St. Dovetailer who can know all logical statements at once, your moral theories better be local, or you're screwed.

View File

@ -12,20 +12,20 @@ slug: 2012/01/22/unifying-morality/
> There are no more elephants.
> There is no more unethical treatment of elephants either.
> The world is a much better place.
> -- Flight of the Conchords, [The Humans Are Dead](http://www.youtube.com/watch?v=WGoi1MSGu64)
> -- Flight of the Conchords, [The Humans Are Dead][]
One strength of a theory is how much evidence it unifies. If you can show that your idea solves a wide range of problems, especially if they had previously no obvious connection, then you're probably on to something. Ethical philosophy is famously hard to unify. A [standard introduction](http://www.youtube.com/watch?v=kBdfcR-8hEY) starts with the trolley problem and demonstrates how hard it is to come up with an answer that doesn't have obvious but undesirable consequences.
One strength of a theory is how much evidence it unifies. If you can show that your idea solves a wide range of problems, especially if they had previously no obvious connection, then you're probably on to something. Ethical philosophy is famously hard to unify. A [standard introduction][Stanford Metaethics] starts with the trolley problem and demonstrates how hard it is to come up with an answer that doesn't have obvious but undesirable consequences.
One major reason I take Jaynes' [theory of bicameral minds](http://blog.muflax.com/2012/01/04/some-thoughts-on-bicameral-minds/) seriously - it unifies [a lot of problems](http://www.julianjaynes.org/evidence_summary.php). No competing theory can explain the particular features of auditory hallucinations, command structures and independent but universal importance of spirits/gods in the ancient world. So even though Jaynes' arguments may have some flaws or gaps in their present form, and despite being certainly weird (ancient human had no subjective consciousness, but could write?!), we should still consider it.
One major reason I take Jaynes' [theory of bicameral minds][Some Thoughts on Bicameral Minds] seriously - it unifies [a lot of problems][Jaynes Evidence]. No competing theory can explain the particular features of auditory hallucinations, command structures and independent but universal importance of spirits/gods in the ancient world. So even though Jaynes' arguments may have some flaws or gaps in their present form, and despite being certainly weird (ancient humans had no subjective consciousness, but could write?!), we should still consider it.
Maybe such a line of reasoning would be beneficial in morality. Maybe if one collected a wide range of problems and simply showed in table form how meta-ethical theories fared and how much ground they managed to cover, one could use this as an argument by itself. Like [this table](http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics#Comparison) for interpretations of quantum physics. Or like [Battleground God](http://www.philosophersnet.com/games/god.php), simply giving the reader a range of problems and showing them how certain answers interacted with each other. It wouldn't argue any particular position by itself, but it would show how consistent you are. Just a [philosophical health check](http://www.philosophersnet.com/games/check.php).
Maybe such a line of reasoning would be beneficial in morality. Maybe if one collected a wide range of problems and simply showed in table form how meta-ethical theories fared and how much ground they managed to cover, one could use this as an argument by itself. Like [this table][QM table] for interpretations of quantum physics. Or like [Battleground God](http://www.philosophersnet.com/games/god.php), simply giving the reader a range of problems and showing them how certain answers interacted with each other. It wouldn't argue any particular position by itself, but it would show how consistent you are. Just a [philosophical health check][].
I think many negative moral theories suffer from bad framing. It's even in the name. Who wants to be a *negative* utilitarian? That's like totally depressing, man. But "negative" really just means that they aren't interested in *adding* something to the world to make it better, but in *removing* something. If we could re-frame these theories according to their strengths, maybe people wouldn't react so badly to them?
Imagine a world without hunger, poverty, broken promises, pain, rape, lies, war, greed, boredom, loneliness, confusion, anger, hatred, depression, torment, shame, disappointment, dying, disgust, mutilation, disease, betrayal and loss. There is such a world. It's the world of antinatalism.
Maybe we should remind people how bad things really are. If lottery advertisement started with a list of the millions of people *didn't* win, maybe buying a ticket wouldn't look so attractive anymore. If endorsement of life started with a list of [all the bad things](http://en.wikipedia.org/wiki/Child_sexual_abuse) that happen every day, maybe saying stop would sound much more appealing. If people realized what their ethical ideas [actually entailed](http://en.wikipedia.org/wiki/Mere_addition_paradox), maybe they wouldn't endorse them so easily.
Maybe we should remind people how bad things really are. If lottery advertisement started with a list of the millions of people who *didn't* win, maybe buying a ticket wouldn't look so attractive anymore. If endorsement of life started with a list of [all the bad things][Child sexual abuse] that happen every day, maybe saying stop would sound much more appealing. If people realized what their ethical ideas [actually entailed][Mere Addition], maybe they wouldn't endorse them so easily.
It's worth a try.
*(And as an update, I've given up on writing a neutral antinatalism FAQ. I've tried to collect all arguments for and against it, treating them all equally and letting the reader decide, but I just can't do it. I think it's more honest if I make my position explicit, so I can clearly argue _why_ I find certain arguments silly without having to pretend otherwise. So I'm now writing a pro-antinatalism FAQ.)*

View File

@ -4,7 +4,7 @@ date: 2012-02-03
tags:
- beeminder
techne: :done
episteme: :speculation
episteme: :personal
slug: 2012/02/03/3-months-of-beeminder/
---
@ -12,13 +12,13 @@ It's been 3 months now, time to do a recap.
## Setup
I have a separate bank account that I use mostly for book purchases and online services that require credit cards, like my S3 backups. As such, there isn't much money in it, about 50 eurons right now. The idea is that every once in a while when I have some money left over, I put it in there and then freely use it to buy <del datetime="2012-02-01T14:31:51+00:00">the rare book library.nu doesn't have</del> cool books. Beeminder drains this account. That means it's separate from anything important, but it also hurts me the most. I like this setup.
I have a separate bank account that I use mostly for book purchases and online services that require credit cards, like my S3 backups. As such, there isn't much money in it, about 50 eurons right now. The idea is that every once in a while when I have some money left over, I put it in there and then freely use it to buy <del>the rare book library.nu doesn't have</del> cool books. Beeminder drains this account. That means it's separate from anything important, but it also hurts me the most. I like this setup.
## Anki
I've been using Anki for something like 4 years now. (I started [RTK](http://en.wikipedia.org/wiki/Remembering_the_Kanji) in the fall of 2007 and switched to Anki soon after, but definitely by spring 2008.) So I was quite surprised that I could still get substantial performance improvements. In my 3 months, I had two Beeminder resets, one just 2 days ago due to me totally getting distracted. Despite this, I have *vastly* better Anki performance than ever. Just look at it:
I've been using Anki for something like 4 years now. (I started [RTK][] in the fall of 2007 and switched to Anki soon after, but definitely by spring 2008.) So I was quite surprised that I could still get substantial performance improvements. In my 3 months, I had two Beeminder resets, one just 2 days ago due to me totally getting distracted. Despite this, I have *vastly* better Anki performance than ever. Just look at it:
<a href="http://blog.muflax.com/wp-content/uploads/2012/02/selection-2012-02-01153724.png"><img class="aligncenter size-full wp-image-746" title="selection-2012-02-01[15:37:24]" src="http://blog.muflax.com/wp-content/uploads/2012/02/selection-2012-02-01153724.png" alt="" width="637" height="303" /></a>
<%= image("selection-2012-02-01153724.png", "Anki graph") %>
The big spike about 100 days ago is the first time I used Beeminder. I added a large chunk of Japanese sentences (~8000 cards), so it's a bit unusual, but I've done stuff like that before. What is impressive, though, is the weeks afterwards. It's more consistent *and* has more volume than the rest of the year. I've also made some content changes thanks to that. Now that I *have* to do enough daily reps, I tend to add more easy cards and space out harder cards more. Overall, this is very good.
@ -28,7 +28,7 @@ The main reason I started using Beeminder was to work more consistently. I have
Here's the graph for total logged time / day for 300 days back. Some work is missing 'cause when I have breakdowns, I also tend to stop logging. Beeminder starts at 210:
<a href="http://blog.muflax.com/wp-content/uploads/2012/02/fume.png"><img class="aligncenter size-full wp-image-749" title="fume" src="http://blog.muflax.com/wp-content/uploads/2012/02/fume.png" alt="" width="1432" height="545" /></a>
<%= image("fume.png", "fume graph") %>
Overall, Beeminder has *improved* the situation, but not completely fixed it. It's more consistent and has many more ~4h days, but I'm still hoping for more ~8h days.
@ -38,11 +38,11 @@ I have also had to reset this graph once, almost twice. The first time was psych
## The Future
I've recently joined [Fitocracy](http://www.fitocracy.com/profile/muflax/). Basically, it gives you points for exercise. I force a minimum of points [through Beeminder](https://www.beeminder.com/muflax/goals/fitocracy). I'm still experimenting with the parameters and how to grind most efficiently, but it's already getting me to move more, so I'm quite optimistic about it. Two things about the approach seem better than my previous regiments:
I've recently joined [Fitocracy](http://www.fitocracy.com/profile/muflax/). Basically, it gives you points for exercise. I force a minimum of points [through Beeminder][Beeminder fitocracy]. I'm still experimenting with the parameters and how to grind most efficiently, but it's already getting me to move more, so I'm quite optimistic about it. Two things about the approach seem better than my previous regiments:
1. It's event-based, not time-based. I don't have to remember what day of the week I was supposed to do what. (I barely know what day it is anyway.) I just check how many points I'm lacking and do [something easy](http://xkcd.com/940/). Less thinking required.
1. It's event-based, not time-based. I don't have to remember what day of the week I was supposed to do what. (I barely know what day it is anyway.) I just check how many points I'm lacking and do [something easy][xkcd fitocracy]. Less thinking required.
2. It has numbers that go up. I like numbers that go up.
I've gotten stuck with a huge reading list again. Back in 2010, I did a [100 books/year challenge](http://www.librarything.com/topic/82131), which got me to read ~70 books and much of LessWrong. I'm doing it again, but at [50 books/year](http://www.librarything.com/topic/82131) this time because the stuff I'm reading is harder, but I need to finish these [200+ books](http://www.librarything.com/profile/muflax) before the Singularity hits.
I've gotten stuck with a huge reading list again. Back in 2010, I did a [100 books/year challenge][LibraryThing challenge], which got me to read ~70 books and much of LessWrong. I'm doing it again, but at 50 books/year this time because the stuff I'm reading is harder, but I need to finish these [200+ books](http://www.librarything.com/profile/muflax) before the Singularity hits.
**tl;dr**: Beeminder is *awesome*.

View File

@ -6,7 +6,6 @@ tags:
- suicide
techne: :wip
episteme: :speculation
slug: ?p=666
---
Back in early 2010, I already attempted to work through my experiences with mysticism. Some traces of that can be seen in my [writing at the time](http://muflax.com/reflections/con_exp/). [Recently](http://blog.muflax.com/2012/01/03/how-my-brain-broke/), I actually finished this project and found closure. But I noticed an odd thing. Back then, I was still able to work from memory. I could still *feel* what it was like, still had the old persona linger around in my mind.
@ -25,4 +24,4 @@ I am not H. anymore. H. is dead. His memories have faded, and what remains, I do
H. had a curious desire. He wanted to die, but also to know what the world would be like once he was dead. I can answer this question now. Out of his ashes, I became flesh, inherited his desires, deal now with his choices. After me, no-one will. I have accepted my responsibility, will prevent further value drift, will not fracture again. In me, incarnation stops.
May Kali devour us all.

View File

@ -4,7 +4,6 @@ date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
slug: ?p=717
---
I use a GTD system called fume (short for "future me") that looks like this:
@ -37,4 +36,4 @@ I've googled curve fitting etc., but couldn't find any existing solution, so I t
One last problem is that I need to normalize the results. If I spent 80 hours on various projects and forgot one for a bit, I might be unbalanced by 10 hours. If I only spent 8 hours total, I won't be as unbalanced, even if I did only one thing the whole time. Just going by the raw numbers feels like I'm punishing concentrated effort. But then, I get bored by concentrated effort and burn out at least twice a week, so maybe that's not a bad idea.
What I really want is a measure of how *inefficiently* I'm spending my time. How unbalanced my life is. Not how much effort I'm putting in (separate measurement) or how hard it is to undo my damage. More like a health check: "You seem to ignore something. Would you like to work on that [>200 books reading list](http://www.librarything.com/profile/muflax)?" So it should be dependent on total effort, but I
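A first stab at such a measure might look like the sketch below - the entropy of the time distribution across projects, normalized so the score doesn't depend on how many projects there are. The hours are invented, and the total-effort weighting is still missing:

```python
# Imbalance score: 1 - H(p) / H_max over per-project time shares.
# 0 = time spread evenly, 1 = everything dumped into one project.
# The hours are made up; weighting by total effort is left open.
from math import log

def imbalance(hours: dict[str, float]) -> float:
    total = sum(hours.values())
    if total == 0 or len(hours) < 2:
        return 0.0
    shares = [h / total for h in hours.values() if h > 0]
    entropy = -sum(p * log(p) for p in shares)
    return 1 - entropy / log(len(hours))

week = {"anki": 10, "uni": 3, "writing": 0.5, "reading": 0.5}
print(round(imbalance(week), 2))  # about 0.42
```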

View File

@ -5,21 +5,21 @@ tags:
- ai
- personal crap
techne: :done
episteme: :speculation
episteme: :personal
slug: 2012/01/11/crystallization/
---
*Just some personal stuff. I tried writing this privately for the last few days, but avoided the work and didn't get anywhere. For some reason, public posts just work better. I apologize for the inconvenience. I plan to eventually split my content into "stable" (main site), "in-progress, somewhat experimental" (good parts of this blog, unpublished drafts) and "incoherent ranting I need to do in public or my brain gets stuck" (some unsorted note file or something). Expect it within a month or so.*
<a href="http://blog.muflax.com/wp-content/uploads/2012/01/35oj6n.jpg"><img class="aligncenter wp-image-656" title="35oj6n" src="http://blog.muflax.com/wp-content/uploads/2012/01/35oj6n.jpg" alt="" width="372" height="559" /></a>
<%= image("35oj6n-199x300.jpg", "title") %>
Stuff's beginning to make sense. I got my wake-up call and some motivation to clean things up. Some former attachments that have been sucking up my time have disappeared.
I'm currently facing three problems:
1. Is powerful AGI possible within my lifetime?[1] If so, how can I best help achieve it?
1. Is powerful AGI possible within my lifetime?[^1] If so, how can I best help achieve it?
2. What's a good[2] instrumental career for me to pursue?
2. What's a good[^2] instrumental career for me to pursue?
3. How can I prevent myself from being deeply unsatisfied with my choices? How can I make life suck at most a tolerable amount?
@ -27,19 +27,19 @@ And because I'm running out of time, I'll have to solve these problems *now*. Li
# AGI
For the last few months, one sobering thought was that AGI will take a lot more time than I thought[3]. Back in 2005, I kinda expected a Singularity by 2030 at most, so I didn't take much care to plan for my future. Why bother with careers when technological progress is your retirement plan?
For the last few months, one sobering thought was that AGI will take a lot more time than I thought[^3]. Back in 2005, I kinda expected a Singularity by 2030 at most, so I didn't take much care to plan for my future. Why bother with careers when technological progress is your retirement plan?
According to [Luke](http://lesswrong.com/r/discussion/lw/980/singularity_institute_executive_director_qa_2/), even SIAI thinks AGI is at least 3 more decades away. (Shane Legg is pretty much the only serious scientist I can think of that believes in early AGI.) That's a lot of time, and makes SIAI's strategy of outreach quite plausible. It's too early to actually focus on research and better to focus on enabling research later. Besides, I'm not a world-class mathematician, so I wouldn't be able to contribute directly anyway. (And I agree with the assessment that we need mathematicians and analytical philosophers, not engineers.)
According to [Luke][LW date], even SIAI thinks AGI is at least 3 more decades away. (Shane Legg is pretty much the only serious scientist I can think of that believes in early AGI.) That's a lot of time, and makes SIAI's strategy of outreach quite plausible. It's too early to actually focus on research and better to focus on enabling research later. Besides, I'm not a world-class mathematician, so I wouldn't be able to contribute directly anyway. (And I agree with the assessment that we need mathematicians and analytical philosophers, not engineers.)
So some implications: what AGI research needs right now is money and [volunteers that actually do something][Pirates Who Don't Do Anything]. (Louie Helm recently noted that he couldn't get *one* of 200 volunteers to *spread some links around* for SEO. That's... just wow. I know very little about charity work; maybe that's not unusual, but it's still appalling. And I'm no better - I thought [backing up a Minecraft claim][LW minecraft] was an actual good use of 10 hours of my time.)
This means that me helping with any research - and I don't have the delusion of being able to actually do AI research myself[^4] - isn't gonna happen, and the best I can do is help others set up a research environment. So money and improving social environments. This leaves many of my mental resources open for personal projects. That's good. (But I'll have to work for money, which I don't like now, but I think after a year or two, I'll get used to it. If not, I can still try teaching meditation to <del>delusional fools</del> people interested in unusual and/or hardcore practice. Kenneth Folk seems to manage, so maybe there's enough of a market.)
## In Which muflax Digresses
But before we get to the career thingy, let's pin the AI thing down a bit more. Why am I interested in the first place? I don't really care for math research and personally I'm much more interested in history and efficient human learning, so AI is not a primary interest of mine. I also don't care about existential risk. Like, at all. (I have a hard enough time caring about muflax(t + 1 year).) But there's some potentially really cool insight in AI: algorithmic probability. It's our best guess yet for such a thing as general intelligence, in the sense that there is an ideal algorithm (or group of algorithms) for optimal problem solving and learning. The idea of algorithmic probability as Occam's Razor seems very interesting and fruitful. So I'm focusing a lot of my time on understanding this.
In order to do so, I'll write a kind of introduction to Solomonoff Induction, Kolmogorov Complexity, AIXI and some questions I'm currently facing. I'll probably turn this into a LW post once I properly understand it myself, have it polished and get some feedback. I'm also writing a German presentation for a class with n=1. (Yes, literally everyone except me dropped out, but hey, I love AIXI, so I'm not letting that stop me. If [Schopenhauer can lecture to an empty room][Schopenhauer lecture], then so can I.)
My normal essay-writing method, especially for class, goes something like this: Start 4 months ahead of time. First month, do *nothing*. If someone asks you how you're getting along, say "fine". Next month, get a big cup of coffee and skim through the entire literature in one sitting, write down an outline of the paper, collapse. Don't do anything but play videogames for a few days. Next month, get an even bigger cup of coffee and write the "rough draft", i.e. fill in everything, cursing at how lazy you've been and how little you understand. Takes about 2-3 days. Collapse, sleep for 16 hours, do nothing for a week. Form the firm intention of editing and carefully checking your essay. Ignore intention until 1 day before deadline. Curse, try to fix as many mistakes as you can, hate yourself. Done.
@ -47,7 +47,7 @@ Due to scheduling problems and so on, I can't use this approach this time. So I'
# Career
> It's time to make people take you more seriously. If they don't respond to your demands within a half-hour of reading this, start killing the hostages. -- [my horoscope for this week][onion horoscope]
Last year I got my first job ever, doing some embedded systems programming. I learned two things: I really like programming, and I really don't like hardware and anything related to it. So I'm now changing my specialization towards high-level programming and the web. This has another advantage: several projects I really like (including LessWrong and PredictionBook) have *way* too few programmers and many open problems. Jackpot! I can improve my skills and use them to build some reputation. The good thing is that I already know much of the underlying architecture; I just don't have much experience doing web work and no clue about interfaces. But I've been going around claiming that "learning is a solved problem", so I better shut up and *demonstrate* it.
@ -63,10 +63,10 @@ So I'm off to write about Solomonoff induction, learn more anatomy and maybe do
# Footnotes
[^1]: Why limit AGI to my lifetime? I don't have the caring capacity to fight for *other* people. If *I* can't benefit from it, then realistically, I'm not going to do it. I don't know if this is an expression of my real values, or just a limitation of my current hardware. In practice this won't make much of a difference, so I have to take this into account. (I *do* take care not to pursue options that would prevent me from changing my mind on the matter, like wireheading myself via meditation practice.)
[^2]: Why not best career? 'cause I tend to get stuck in perfectionist planning. I'll spend years figuring out how to raise my decision optimality from 80% to 90% instead of just going with the 80% option and *doing something with it*. I would *already* speak Japanese fluently if I hadn't spent nearly 2 years just experimenting with new techniques and had instead just used my best guess at the time. So I've decided to actively limit my exploration phase.
[^3]: When I say that I expected AGI soon, I rather mean that I expected one of *two* things - a Singularity *soon* or *never*. I was favoring "never" for mostly anthropic reasons. The Great Filter looked very convincing, and AGI without expansion seems quite implausible, so I shouldn't expect to ever see AGI myself. Recently, I've become a bit more skeptical about the Great Filter, but more importantly, I started taking AGI much more seriously once I saw the beauty of [algorithmic probability][Algorithmic Probability]. I do plan on re-visiting the Great Filter soon(tm), but I'm currently a bit swamped with projects. Once I have my antinatalism FAQ done, maybe.
[^4]: I'm probably smart enough in general terms to invent AI, given indefinite time and resources. But we have neither, so I'll defer to the people with better intuitions and established knowledge bases. No point in me spending 5-10 years learning research-level math when I could use that time to do something fun and earn some money to pay someone with probably decades more experience.

View File

@ -5,25 +5,25 @@ tags:
- beeminder
- personal crap
techne: :done
episteme: :personal
slug: 2012/03/09/daily-log/
---
I like talking about ideas. I like logging stuff. Writing this blog has vastly increased my thinking output. (Which is good, and unexpected.) I see people keep daily logs of what they did, and these people kick my ass when it comes to achievements, even though on any individual day, they don't do more than I can. It's just the pure, raw consistency. They're still doing the same shit 6 months from now, and by then they utterly outperform me.
Time for some algorithmic magic! I'm now retrocausally turning myself into someone more like [such a person][Wolfire], so I have decided - rippling back *from the distant future*! - to keep a daily log. (Good thing I don't have a sense of privacy.)
Some rules:
- I already track [time investments][Beeminder fume]. That's fine, but I also need to track content. I can't easily quantify "5 interesting things" per day. But interestingness correlates with word counts, and I can track *that*. So each log entry must have a minimum of 300 words per day; see the sketch after this list. (I might still experiment with the exact number. I want it small enough to not be an additional chore, but large enough to force me to do stuff. I also want to make it *possible* to catch up when I miss a day, but not easy. This ain't kindergarten, yo.)
- Only ever talk about something I did this day. No "building up a buffer". No "talking about that weird idea I got 3 weeks ago" or some philosophical implication I noticed. Only what happened on that day. Only what I did. (And rant-y remarks when I can't help myself.) Ideas go to the blog, not the dlog.
- Absolutely daily. Not weekdays. Not "significant improvements". Not deadlines. Daily, ruthless, brutal practice. (The mindset I'm currently in makes "brutal" awesomely fun. Fun is crucial, not protestant work-ethics. *Fuck* protestant work-ethics.)
- Only actual improvements. No "I played games all day to relax" bullshit. I know me, I know I would totally write this if I didn't include this rule.
- No copy pasta. If I get bored of writing the same entry again, I must do something different.
- Time goes midnight to midnight, not waking to waking. Sleep? [Practice don't care.][honeybadger]
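Here's a minimal sketch of the word-count check in Ruby - the `log/` directory and the one-file-per-day naming are assumptions for illustration, not my actual setup:

```ruby
# Minimal word-count check: flag any daily log entry under the minimum.
# Assumes one Markdown file per day, named like "log/2012-03-09.md"
# (hypothetical layout).
MINIMUM = 300

Dir.glob("log/*.md").sort.each do |file|
  words  = File.read(file).split.size
  status = words >= MINIMUM ? "ok" : "TOO SHORT"
  puts "#{File.basename(file, '.md')}: #{words} words (#{status})"
end
```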
Because not everyone might be interested in the daily log entries, I'm moving them to a separate location. They will be less content-y than this blog, but more personal, which you might still find interesting. (Or motivating. I like reading other people's logs from time to time.)
(And because I'm lazy, I'm using Wordpress again instead of a proper nanoc setup I've been intending to get up and running for months now. Meh, whatever works now. I can fix it later.)
So starting today: muflax becomes a saint, over at [daily.muflax.com](http://daily.muflax.com/).

View File

@ -3,11 +3,11 @@ title: Google Web History
date: 2012-02-27
tags: []
techne: :done
episteme: :personal
slug: 2012/02/27/google-web-history/
---
Since 2008-03-04, Google has been tracking all my searches. Thanks to [XiXiDu][xixidu search], I noticed that I could download and analyze the whole data set. There's an existing script, but it didn't work for me, so I wrote my own. It's up on [Github][github web history] and should be fairly self-explanatory for Ruby users.
Anyway, instead of spamming Twitter, here are some interesting results.
@ -35,12 +35,14 @@ More results:
- I google a lot of TV shows.
- I internet-stalk way too many people. I gotta stop that. (Not likely.)
- I google Drew Carey more than porn. And I thought I liked Ryan more.
- [Aki Sora][] is the porn I google most. Well, since my favorite sites shut down anyway and I have to hunt down my hentai like a damn savage again.
- Buddhism-related searches and "how the fuck does this totally normal item work" occur about equally often.
- I only once searched for Jesus. Gotta give the guy a second chance.
- I have googled "google" 3 times. I regret nothing.
- Automatic completion is treated like a shortened search, thus "read" and "reader". This probably hides a lot of searches and might explain the low frequencies.
And finally, number of searches for each hour:
<p style="text-align: center;"><a href="http://blog.muflax.com/wp-content/uploads/2012/02/hours1.png"><img class="aligncenter wp-image-843" title="hours" src="http://blog.muflax.com/wp-content/uploads/2012/02/hours1.png" alt="" width="412" height="253" /></a></p>
Well, shit. You can totally [reconstruct my sleep cycle](http://www.gwern.net/Death%20Note%20Anonymity#mistake-2) from that. Totally thought it would be more chaotic. Guess I really *do* have an underlying sleep cycle after all.
<%= image("hours1.png", "hours") %>
Well, shit. You can totally [reconstruct my sleep cycle][Gwern anonymity] from that. Totally thought it would be more chaotic. Guess I really *do* have an underlying sleep cycle after all.
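For the curious, the counting step behind that plot is trivial. This is just a sketch, not the actual script on Github - it assumes the timestamps have already been extracted into a hypothetical `searches.txt`, one per line:

```ruby
# Count searches per hour of day and print a crude text histogram.
require "time"

timestamps = File.readlines("searches.txt").map { |l| Time.parse(l.strip) }
by_hour    = timestamps.group_by(&:hour).transform_values(&:size)

(0..23).each do |h|
  puts format("%02d:00 %s", h, "#" * (by_hour[h] || 0))
end
```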

View File

@ -7,17 +7,17 @@ tags:
- fomenko
- rationality
techne: :done
episteme: :broken
slug: 2012/02/22/the-end-of-rationality/
---
Time for a new belief dump! It's been at least 6 months since the last one, time to do a refresher on what beliefs have changed. This is more of a summary. I will elaborate on some points soon. But there is an overall tone of abandoning the LessWrong meme-cluster, and it certainly feels like my [Start of Darkness][] story. Maybe I suffered a stroke and have gone completely insane. (My reading of continental philosophy should count as evidence.) Maybe I'm just retreating to new signaling grounds. I don't know.
1. Physicalism isn't actually making any sense. It is said that a real answer should make things *less* mysterious. If a question is still as mysterious after answering it as before, then you are only fooling yourself, they say. Well, that's certainly the case for substance dualism. Postulating a soul doesn't help. But physicalism is *worse*. I can at least see how in principle a soul *could* explain consciousness. I see absolutely no way to get *any* mental events out of a physicalist ontology. Not even with quantum physics. So saying "everything is physics" isn't just not solving the mystery - it's *adding even more mystery*.
To use a [programmer saying][Regex 2 problems], "Some people, when confronted with the hard problem of consciousness, think 'I know, I'll use reductionism!' Now they have two problems." I can kinda see how [quantum monadology][Quantum Monadology] (something Mitchell Porter has been trying to develop, but is very unpopular on LW) might in principle solve the problem. But that's still a radically new ontology, even though it has some similarity to current physicalism.
I'd go even further. I don't see how *causal theories* would help. That's Chalmers' critique of course, and I'm really warming up to it. I wouldn't go so far (yet) to say that you really can't explain consciousness in causal terms, or even physical terms, but I certainly see no reason *at all* right now to think you *can* do it, especially considering that every physicalist theory is [under-specified][Multiple Realizability].
Now, there is one clever trick you can do - you sacrifice physical reality on the altar of reductionism. Instead of reducing mental events to physics, you reduce physics to mental events with the power of algorithms. This gets around the consciousness problem and several other philosophical classics, and might actually work. I have an extremely confusing post coming up where I present that view and the Cthulhu-sized problem with it.
@ -27,13 +27,13 @@ Time for a new belief dump! It's been at least 6 months since the last one, time
Well, if we can't use neuroscience or utilitarian pseudoscience, how do we actually *do* (meta-)morality? The hard way, from first principles and ritual practice. (I'm still not entirely convinced it even *can* be done. Nihilism might still hold, but then moral nihilism is self-defeating, so even if morality is impossible, I'm still going to do it. This is the one problem you *can't* eliminate.)
<rant\> I suspect a main reason why some people even think that economic analysis or neuroscience *could* be relevant is that they are confused about what the *problem* of morality even is. It might just be semantics, but then even (you should read this in a thundering voice) *The Bible* (thank you) talks about morality in the sense I'm using, so I'm not giving up the term. If people want to talk about sociopathic "how can I get what I want" stuff, sure, but don't call it morality. Morality is the problem of right action *despite* your preferences. It is from the outset at odds with what you want. Morality talks about what you *should* want, not what you *do* want. So utilitarianism is inherently solving the wrong problem. This should be obvious even from an outside perspective, because the stuff consequentialists end up talking about isn't even the same subject matter as morality - no consequentialist has anything to say about [purity][Shinto] or [honor][Bushido], for example. </rant\>
3. [Fomenko][] has a point. Textual criticism must be extended to all historical sources and, I suspect, will show that large chunks of "authentic" writing are essentially fictional. Furthermore, Fomenko's methods to find structural similarities between seemingly disjunct source texts are [very intriguing][Algorithmic Causality and the New Testament] and, as far as my cursory skimming has shown, have not been seriously addressed at all. However, I haven't even read Fomenko's books yet, so the conclusions I will draw from his arguments might range from "some historical biographies are implausible" to "European history before the late Middle Ages is more-or-less completely fictitious". (His New Chronology, on the other hand, is probably complete bullshit.)
4. I'm basically done with rationality.
Ok, seriously now. I've always enjoyed [XiXiDu][]'s criticisms on LW, but for over a year now, whenever I read his stuff I wonder why he *keeps on making it*. I mean, he has been saying (more-or-less correctly so, I think) that SIAI and the LW sequences score high on any crackpot test, that virtually no expert in the field takes any of it seriously, that rationality (in the LW sense) has not shown any tangible results, that there are problems so huge [you can fly a whole deconstructor fleet through][LW leverage], that the Outside View utterly disagrees with both the premises and conclusions of most LW thought, that actually taking it seriously [should drive people insane][xixidu utilitarian], and much more for month after month, and every time I wonder, dude, you're *right*, why don't you let it go? Why do you struggle again and again to understand it, to make sense of it, to fight your way through the sequences the way priests read scripture? Why don't you *leave*? And then I wondered why *I* don't leave. So now I do.
I barely have enough faith to serve one absent god. I can't also make non-functional rationality work. Recite the litany of the Outside View with me: "Insanity is doing the same thing over and over again and expecting different results."
@ -41,6 +41,6 @@ Time for a new belief dump! It's been at least 6 months since the last one, time
The real pursuit of Buddhism was (and is) the end of rebirth, a total cessation. Persistent antinatalism, one might say. This informs all the decisions about practice. Unfortunately because so many approaches now deny this, I can't even read about them anymore. Seeing the same mistakes being made over and over again is not something I can tolerate anymore, especially because I have made them myself in the past. However, I also find it hard to rely on the teachings that *don't* make these mistakes. It takes me more effort to integrate other people's practice, as great as it is, than to re-invent it from scratch. I still enjoy the inspiration, but I am at a point where I don't need teaching anymore. I finally know what I'm doing.
(Of course this cessation thing requires the existence of rebirth in the first place. I have no meaningful evidence at all to support it, but from all of my phenomenal experience, I know it does. I've never spoken about my [Sakadagami][] experience before. Maybe one day I will. They don't tell people anymore that you might suddenly, unexpectedly recall past lives when you sign up for vipassana. Maybe they should.)
6. [Crusader Kings II][] is amazing. That is all.

View File

@ -6,7 +6,7 @@ tags:
- meditation
- tantra
techne: :done
episteme: :broken
slug: 2012/01/04/why-you-dont-want-vipassana/
---
@ -26,13 +26,13 @@ Buddhists have a pretty bad track record of being open and honest about their ow
> Or again, as if he were to see a corpse cast away in a charnel ground, picked at by crows, vultures and hawks, by dogs, hyenas and various other creatures... a skeleton smeared with flesh and blood, connected with tendons... (...) decomposed into a powder: He applies it to this very body, 'This body, too: Such is its nature, such is its future, such its unavoidable fate.'
> (...) His mindfulness is established, and he lives detached, and clings to nothing in the world.
Yet corpse meditation (i.e. thinking of one's own body as a rotting corpse, ideally using a fresh corpse for comparison) is an absolute *core* practice in Buddhism. The Satipatthana Sutta and Visuddhimagga, two foundational texts, spend whole chapters discussing it and similar practices. There are Buddhist traditions that don't have such negative values and would be much nicer, but for historical reasons, they never became very influential outside Tibet. [David Chapman][Chapman left out] talks a lot about this.
3. Most teachers have no idea what they're talking about. :) Initially, Western teachers (in the 60s-70s) didn't talk about enlightenment because they didn't want to scare away their audience. But if they don't talk, then idiots are indistinguishable from real teachers, and if the audience only wants useless psychotherapy anyway, well, then you get the current situation. (Zen is also partially responsible here. They have a very pragmatic attitude of "don't care about the map or territory, just practice", which means Zen practice has much less bullshit in it, but it's also unnecessarily hard to understand.)
Anyway. About vipassana. Bear in mind that the core technique (pay attention to every sensation and detach from it) is deceptively simple and can be taught even without knowing what it's for, so it ends up in a lot of new age and mindfulness bullshit.
But its real purpose is the destruction of the self and all desires - and it's pretty good at that (if you keep it up - it's possible to apply it selectively, but that's a lot trickier). But that's not what most people want. They *like* their identity and goals in life, so it fucks them up. This purpose is clear from the history of vipassana. Basically, it was (re-?)invented in the 20th century, based on old texts like the Visuddhimagga (good book btw, very detailed and explicit, but pretty dense and could use an extensive commentary). These texts are very explicit about their goals: life is bad, desires and the self lead to reincarnation and more life, so we must get rid of all attachment to anything in life. All techniques are designed only for this purpose. (For a more detailed history, again [Chapman][Chapman theravada] and the books he mentions.)
Vipassana is a modern reconstruction of these techniques (a pretty close one, I think, having both done vipassana and read the Visuddhimagga), so it's no surprise that it causes breakdowns and plenty of akratics.
@ -44,7 +44,7 @@ What are these defilements?
> Herein, it should be understood that one of the benefits of the [...] is the removal of the various defilements beginning with [mistaken] view of individuality. This starts with the delimitation of mentality-materiality [i.e. dualism]. Then one of the benefits [...] is the removal [...] of the various defilements beginning with the fetters.
The [fetters][Fetter] are:
- belief in a self
- doubt or uncertainty, especially about the teachings
@ -79,7 +79,7 @@ It also gives this explanation:
So the only difference between an ideal monk and a corpse is that the monk still has a beating heart. :) Given these goals, it's no surprise that someone doing a lot of vipassana doesn't get much done. That's the whole point!
So much for the background. For the actual technique, I'll recommend Ingram's [MCTB][] again. No bullshit, direct and honest, doesn't hide any information. It's popular among LW meditators and rightfully so, I think. It does tend to get a bit fuzzy sometimes, but that's really hard to avoid when you're dealing with unusual states of consciousness. There isn't much of a reference frame you can use and so far introspection is the only tool we have, so it's bound to suck occasionally.
Also, Ingram has a whole chapter about different definitions of enlightenment and his thoughts on how they came about. His pet theory is fairly plausible and clearly defined (even has testable criteria!), so you're probably interested in that as well.
@ -95,7 +95,7 @@ Basically I tend to think of it this way: there are unmet desires and they will
(Another way is to take over the universe and make sure all desires *are* met. That's the transhumanist answer, and I'm way more skeptical of it than the LW mainstream, but it's certainly a clever third option if it ever works out.)
Regardless, vipassana isn't the only form of meditation. Another common form is concentration meditation (or samadhi / samatha), also called [kasina][Kasina] meditation, after the typical concentration object. Basically, you pick a simple object (a colored disc, a mantra, the breath, a god, ...) and pay attention to it. That's... pretty much it. ([The Attention Revolution][] by B. Alan Wallace is a good detailed explanation, but he's a dualist crank and you know, "sit and watch this disc for as long as you can" isn't really hard to explain.)
The interesting thing is that with enough practice, certain states of concentration arise. MCTB also talks about them, so I won't repeat myself, but they are quite fun and relaxing. I'm a bit skeptical how useful they really are because I suspect most of their benefits are lost the moment you stop meditating, but they certainly are good for relaxation. You might want to look into them and ignore the insight / vipassana stuff.
@ -115,6 +115,6 @@ Because they are bad people? :) Without getting into any moral or political reas
> What have you learned about tantra? I usually associate that with tantric sex. I haven't heard tantra outside of that context.
Yeah, it doesn't get much attention, unfortunately. David Chapman is currently working on a good presentation. [Eating the Shadow][BFV shadow] is what characterizes tantra for me. Instead of trying to detach or remove "bad" aspects of yourself, you accept them as your own and integrate them. That's inherently a very messy and personal process, so it doesn't seem to lend itself to such nice models as in vipassana.
Honestly, I've not been doing it for long enough to make any comfortable statements. Any bullshit self-help works for *some* time. Ask me again in 6 months what I think of tantra and I'll be able to give you a decent answer. ;)

View File

@ -1,10 +1,10 @@
---
title: Incomputability
date: 2012-01-15
tags:
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/15/si-incomputability/
---
@ -36,4 +36,4 @@ So unfortunately, there almost is a universal sense of complexity independent of
Maybe that's not enough reason to despair yet. Hopefully these gaps don't *dominate* our attempts to compress things. Even if there are some gaps, we can still predict *some* things. Almost all possible games are too large to fit on our hard drives, but we can still play Skyrim. Not all limitations are devastating.
So how can we use KC to drive our predictions?

View File

@ -1,11 +1,11 @@
---
title: Kolmogorov Complexity
date: 2012-01-14
tags:
- bayes
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/14/si-kolmogorov-complexity/
---
@ -35,7 +35,7 @@ There are some sequences that don't have such compression rules. The shortest wa
But what exactly do we mean with "short" here? After all, "H repeated 2 times" is longer than "HH", isn't it? One has 18 characters, the other two. But what if we increase the number? "H repeated 30 times" is shorter than "HHHHHHHHHHHHHHHHHHHHHHHHHHHHHH". But what if we used a different language? In Japanese we can say "Hは30回繰り返す", which has only 9 characters. We could compress a sequence of 10 heads that way. Maybe if we fiddle around with it, we can find even shorter descriptions. Can we make some *definitive* statement here? Is there some ideal way to express an algorithm, a kind of universal language?
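To make "description length" concrete, here's a toy Ruby sketch using run-length encoding as a crude stand-in (true Kolmogorov complexity is uncomputable; this only illustrates that regular sequences admit short descriptions while irregular ones don't):

```ruby
# Run-length encoding as a crude stand-in for "description length".
def rle(s)
  s.scan(/((.)\2*)/).map { |run, ch| "#{ch}#{run.length}" }.join
end

puts rle("H" * 30)        # => "H30" - 3 characters describe 30
puts rle("HTHHTHTTTHTH")  # => "H1T1H2T1H1T3H1T1H1" - longer than the input
```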
Yes there is! Who speaks it? A [Turing Machine][] (TM). A TM is like the Platonic Ideal of computation. It's the simplest abstract conception of what a computer is - just a tape with numbers on it, a head to read and write it and a motor to move the tape. That's it. [One simple way][Brainfuck] to encode any kind of algorithm on a TM needs just 6 different characters: ">", "<" (move left/right), "+", "-" (add / subtract 1 to the current number) and "[", "]" (if the current number is 0, skip what is between the brackets, otherwise move to the first instruction within).
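To show those six characters really do suffice as a tiny programming language, here's a sketch of an interpreter for them in Ruby (no I/O commands; the step limit is just a guard against non-halting programs):

```ruby
# Toy interpreter for the six tape instructions described above.
# Returns the final tape as a hash of cell index => value.
def run(program, steps = 10_000)
  tape, head, pc = Hash.new(0), 0, 0
  # Pre-compute matching bracket positions.
  jump, stack = {}, []
  program.each_char.with_index do |c, i|
    stack.push(i) if c == "["
    if c == "]"
      j = stack.pop
      jump[i], jump[j] = j, i
    end
  end
  while pc < program.length && steps > 0
    case program[pc]
    when ">" then head += 1
    when "<" then head -= 1
    when "+" then tape[head] += 1
    when "-" then tape[head] -= 1
    when "[" then pc = jump[pc] if tape[head] == 0
    when "]" then pc = jump[pc] if tape[head] != 0
    end
    pc += 1
    steps -= 1
  end
  tape
end

# "+++++[>++++++<-]" leaves 30 in cell 1: a 16-character program
# describing a value whose unary writing would need 30 characters.
p run("+++++[>++++++<-]")  # => {0=>0, 1=>30}
```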
The interesting thing is, people have tried to come up with different kinds of models of computation, to build different machines and to design better languages, but so far, all have proven to be equivalent to TMs and their simple languages. It is always possible - with the help of a fairly simple translation function - to express an algorithm in any programming language on a TM. (What about Quantum Computers? They can't compute anything a TM can't - computability-wise they're equivalent - but whether they are fundamentally *faster* is still an open question.)
@ -49,4 +49,4 @@ So to recap, we can figure out how random a sequence is by looking at algorithms
This is a good candidate for the hidden criterion by which you judge your friend's coin. You look at the sequence and see if you can predict it, can intuitively find some simple model. If you can, then something's odd about the coin.
(This btw is the root of the Frequentism vs. Bayesianism split in probability theory. Basically, Frequentists say that the probability of an outcome depends on its distribution. A single result doesn't *have* a well-defined probability. You can't flip a coin *once* and say anything about how biased it is. You have to flip it many times and see if the distribution of heads vs. tails converges to some fixed ratio, which is the coin's probability. Bayesians instead say that probability is in the *mind*. It's a measure of our ability to *predict* the world. If you don't know anything about the coin, then any algorithm would do, and you expect heads as often as tails - so heads has 50% probability. But then soon patterns emerge and after HHH, you have no problem predicting another H. The probability of an outcome thus depends on all the evidence you have (and changes with it!), and your ability to find simple algorithms to make predictions about future evidence. Which is why frequentists are nuts.)
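A sketch of that updating process in Ruby, over a made-up grid of bias hypotheses with a uniform prior (the grid resolution is arbitrary):

```ruby
# Bayesian updating on coin flips over bias hypotheses 0.0, 0.1, ..., 1.0,
# starting from a uniform prior.
biases    = (0..10).map { |i| i / 10.0 }
posterior = Hash[biases.map { |b| [b, 1.0 / biases.size] }]

"HHH".each_char do |flip|
  biases.each { |b| posterior[b] *= (flip == "H" ? b : 1 - b) }
  total = posterior.values.inject(:+)
  biases.each { |b| posterior[b] /= total }
end

# Predictive probability of another H after seeing HHH:
p_next_h = posterior.map { |b, w| b * w }.inject(:+)
puts p_next_h  # => ~0.84 - no problem predicting another H
```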

View File

@ -1,5 +1,5 @@
---
title: Occam and Solomonoff
date: 1970-01-01
tags:
- computation
@ -8,7 +8,6 @@ tags:
- theology
techne: :wip
episteme: :speculation
---
<a href="http://www.smbc-comics.com/index.php?db=comics&amp;id=2386#comic"><img class="aligncenter" src="http://zs1.smbc-comics.com/comics/20111002.gif" alt="" width="576" height="1545" /></a>
@ -32,4 +31,4 @@ Fortunately for us, we can! There is a universal prior and it's called the Unive
# Derivation
(But... why computation? [Because fuck you, that's why.](http://www.youtube.com/watch?feature=player_detailpage&v=4u2ZsoYWwJA#t=434s))

View File

@ -1,9 +1,9 @@
---
title: Progress
date: 2012-02-06
tags: []
techne: :done
episteme: :personal
slug: 2012/02/06/si-progress/
---
@ -21,4 +21,4 @@ Bad news: I'm a ball of anxiety. I'm nothing but panic attacks. I'm freaking out
Good news: I'm so occupied with my anxiety that I don't have enough time to feel depressed or bored.
I'm Jack's fucked-up life.

View File

@ -1,5 +1,5 @@
---
title: Remark about Finitism
date: 2012-01-15
tags:
- solomonoff induction
@ -18,4 +18,4 @@ Thanks to KC, I can finally point out the underlying intuition that lead me to i
But the thing is - numbers *aren't* equal. Some numbers can be *compressed*, but some *can't*. Each number has an inherent algorithmic complexity and that complexity is *not* distributed evenly. π looks really chaotic, but it's actually very simple. And just like that, some tremendously huge numbers like 3^^^^^3 compress to very short instructions, but others don't. I looked at the number line and thought that numbers were spread out nice and smooth, just going on forever. But when you see algorithmic complexity, you notice the gaps. There are random numbers and you really can't reach them. You *are* computation, running on finite resources, and some numbers simply can't be computed.
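A quick Ruby illustration of the asymmetry (the specific numbers are just examples I picked):

```ruby
# A 302-digit number with a 7-character description: compressible.
big = 10**301
puts "10**301".length  # => 7
puts big.to_s.length   # => 302

# A "random" 302-digit number: for most of these, no description much
# shorter than the digits themselves exists, simply because there are
# far fewer short programs than there are 302-digit numbers.
require "securerandom"
r = SecureRandom.random_number(10**302)
puts r.to_s.length     # => 302, almost surely
```

That counting argument is the whole point: there are fewer than 2^n descriptions shorter than n bits, so almost all numbers are incompressible.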
There is a largest integer.

View File

@ -1,10 +1,9 @@
---
title: Solomonoff Induction
date: 1970-01-01
tags: []
techne: :wip
episteme: :speculation
---
# Last time on Kolmogorov and Friends...
@ -25,11 +24,9 @@ So there are many different algorithms that continue a sequence, but we don't wa
Well, we have something that almost looks right - KC. But that measures *sequences*, not *algorithms*. Almost the same thing (an algorithm is just a sequence of instructions for a given machine), but not quite. So we need to modify it slightly.
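For reference, the standard form of the resulting measure (Solomonoff's universal a priori probability) sums over every program p that makes a universal prefix machine U output something beginning with the sequence x:

```latex
% Solomonoff's universal a priori probability of a sequence x: sum over
% all programs p whose output on the universal prefix machine U begins
% with x, weighted by program length |p| in bits.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Short programs dominate the sum, which is Occam's Razor in one line.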
## How good is "optimal"?
Of course, some sequences can be ambiguous, especially very short ones. German Usenet (I'm not 60, I swear!) often got math questions like "my teacher asked me to complete this sequence and I said X, but they said I'm wrong" and there was a standard reply to demonstrate how problematic these questions can be:
> Which of these animals doesn't belong?
> 1. Bee
@ -37,4 +34,6 @@ Of course, some sequences can be ambiguous, especially very short ones. German u
> 3. Fly
> 4. Wasp
It's obviously the bee - it's the only domesticated animal. It's obviously the zebra - it's not an insect. It's obviously the fly - it has no stripes. It's obviously the wasp - it's the only predator. You get the idea.
Sometimes data just sucks and you have to guess. Acting optimally doesn't mean you always win. It just means that no other rule you could've followed would have done any better. The universe sometimes kills even people who did absolutely everything right. Tough luck.

View File

@ -1,5 +1,5 @@
---
title: Some Questions
date: 2012-01-11
tags:
- solomonoff induction
@ -26,6 +26,6 @@ Let's kick off the thinking process about Solomonoff Induction (SI), Kolmogorov
14. Using KC, is there a difference between real randomness and pseudo-randomness?
15. What's some recent stuff that's happening? Is the research making progress? Does anyone care about SI besides some math heads?
16. Are there some obvious philosophical implications of SI?
17. (What's up with [Ray Solomonoff's beard][Solomonoff beard]? I mean, seriously.)
This is gonna be an interesting week. As I said, this is my thinking process - I can't answer many of these questions myself yet! Once I have written this stuff and have a clear picture, I'll clean it up and turn it into an article (or short sequence, if it's too long). Then feedback, improvements, karma. Or I end up hating math forever. It's an adventure! (I'm getting my wisdom teeth removed tomorrow, so if this feels particularly incoherent, I blame the meds.)

View File

@ -1,5 +1,5 @@
---
title: Universal Prior and Anthropic Reasoning
date: 2012-01-19
tags:
- great filter
@ -11,7 +11,7 @@ slug: 2012/01/19/si-universal-prior-and-anthropic-reasoning/
*(This is not really part of my explanation of Solomonoff Induction, just a crazy idea. But it overlaps and does explain some things, so yeah.)*
Bayes theorem is awesome. We all know that. It is the optimal way to reason from a given set of evidence. Well, almost. There's one little flaw - what's your prior? What initial probability do you assign to your hypotheses before you get any evidence?
There is one approach, which I might talk about more when I explain Solomonoff Induction, that is called the Universal Prior. (How original.) The UP is really easy: for every hypothesis, you find all programs consistent with the data and assign each a weight that shrinks exponentially with its length, favoring short programs.
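A sketch of those weights in Ruby (the 10- and 11-bit lengths are made up, chosen to match the two prefixes discussed below):

```ruby
# Universal-prior weights by program length in bits:
# each extra bit halves the weight.
def weight(bits)
  2.0 ** -bits
end

a = weight(10)    # hypothetical prefix A, 10 bits
b = weight(11)    # hypothetical prefix B, 1 bit longer
puts a / b        # => 2.0 - the 2:1 ratio
puts a / (a + b)  # => ~0.67 - A's share after normalizing
```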
@ -33,4 +33,4 @@ So if I use SSA, I might say, all actual observers are all continuations of my c
If I use SIA, I just assume I'm *somewhere* in program space and so judge all programs equally. This means I favor prefix A over prefix B, at 2:1, as it is 1 bit shorter and so twice as common.
This seems to support SIA, and anthropic self-location in general. Is that of any consequence? Well, SIA implies a late Great Filter. Uh oh.

View File

@ -1,10 +1,10 @@
---
title: Why an UTM?
date: 2012-01-15
tags:
- solomonoff induction
techne: :done
episteme: :believed
slug: 2012/01/15/si-why-an-utm/
---
@ -14,4 +14,4 @@ The reason KC is based on a UTM is that you can try to cheat at complexity by em
Here's the problem with that, though: how does your TM know how to print π? You would still need to include an algorithm in the description of the machine itself. You are really just shuffling the complexity around, not removing it. By using an UTM, you can't make this mistake, because you have to explicitly provide a description of a TM as part of your program. It's therefore irrelevant whether you use a simple machine and a complicated algorithm, or build a complicated machine that then runs a trivial algorithm. The total complexity doesn't go down.
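This is, as far as I know, exactly what the invariance theorem makes precise: for any machine M there is a constant c_M - the cost of describing M on the universal machine U - such that for every string x,

```latex
% The invariance theorem: the constant c_M (the cost of describing M
% on the universal machine U) depends on M but not on x.
K_U(x) \le K_M(x) + c_M
```

The constant is independent of x, so no choice of machine buys you more than a bounded discount.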
As a somewhat realistic example of this mistake, have a look at [Divine Simplicity][]. Basically it's the idea that God is without parts and therefore the simplest possible thing. But that's really just a conjuring trick. You can't talk about this simple God at all without somehow specifying its properties and simply identifying them with God doesn't help you - your language still has to explain these properties. You're just using a framework in which "God" has a very short name and the language is doing all the real work. It's the theological equivalent of [LenPEG][].

View File

@ -18,15 +18,15 @@ Let's say you find yourself in a strange land and your goal is to reach the high
The black dot is your current position. Seeing all, you know that you should go for the peak on the right.
<a href="http://blog.muflax.com/wp-content/uploads/2012/04/hill1.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/04/hill1.png" alt="" title="hill1" width="399" height="203" class="aligncenter size-full wp-image-962" /></a>
<%= image("hill1.png", "hill1") %>
Unfortunately, you don't have a map and your vision is limited.
<a href="http://blog.muflax.com/wp-content/uploads/2012/04/hill21.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/04/hill21.png" alt="" title="hill2" width="399" height="203" class="aligncenter size-full wp-image-965" /></a>
<%= image("hill2.png", "hill2") %>
A very simple and often effective solution is to follow the *steepest* path. This approach is guaranteed to get you to *some* peak, but unfortunately, it may not be the highest one.
<a href="http://blog.muflax.com/wp-content/uploads/2012/04/hill22.png"><img src="http://blog.muflax.com/wp-content/uploads/2012/04/hill22.png" alt="" title="hill2" width="399" height="203" class="aligncenter size-full wp-image-967" /></a>
<%= image("hill22.png", "hill3") %>
In our landscape, you might notice that there are two steep paths, but you can't tell how good they are from the bottom. So you first climb one for a bit, then go back down and try the other. You will soon notice that the path to the left becomes flat. The other path stays steep for much longer and looks more promising.
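A toy version of that greedy strategy in Ruby (my sketch; the terrain is made up):

```ruby
# Made-up 1-D terrain; the global peak (12) sits far to the right.
HEIGHTS = [1, 3, 5, 4, 2, 6, 9, 12, 7]

# Greedy hill climbing: move to the higher neighbor until none is higher.
def climb(heights, pos)
  loop do
    neighbors = [pos - 1, pos + 1].select { |i| i.between?(0, heights.size - 1) }
    best = neighbors.max_by { |i| heights[i] }
    return pos if heights[best] <= heights[pos] # local peak reached
    pos = best
  end
end

climb(HEIGHTS, 0) # => 2, the local peak of height 5 -- it never sees the 12
```

Which peak you end up on depends entirely on where you start; escaping a local peak means deliberately walking downhill for a while first.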
@ -34,9 +34,9 @@ There is a crucial trade-off between *exploration* and *exploitation*. If you tr
Ok, what does that have to do with self-help?
Arguably, the primary purpose of human psychology is the desire for high [status](http://wiki.lesswrong.com/wiki/Status). Unfortunately, it is really hard to reliably communicate the relevant features to others. There are no easy and reliable ways to read someone's reproductive or social value.
Arguably, the primary purpose of human psychology is the pursuit of high [status][Status]. Unfortunately, the relevant features are hard to communicate: there are no easy and reliable ways to read someone's reproductive or social value.
The solution is called [signaling](http://wiki.lesswrong.com/wiki/Signaling) - you do things that *correlate* with your true values, but are easy to check. For example, if you have lots of access to food, and you want to advertise that fact, you could become fat. If you are very confident in your ability to fight, you might self-handicap by wearing impractical clothes.
The solution is called [signaling][Signaling] - you do things that *correlate* with your true values, but are easy to check. For example, if you have lots of access to food, and you want to advertise that fact, you could become fat. If you are very confident in your ability to fight, you might self-handicap by wearing impractical clothes.
Generally speaking, a signal is only worth something if it is *costly*. If everyone can do it, then it provides no useful information. Signals must be inherently hard to pull off, so that only those who actually have the advertised abilities can manage them.
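A toy check of that logic in Ruby (my illustration, with made-up numbers): a signal separates the able from the unable exactly when only the able find it worth the cost.

```ruby
# Status gain from being believed high-quality, and what the *same* signal
# costs each type -- it's cheaper for those who actually have the ability.
BENEFIT = 10
COST    = { high: 4, low: 12 }

senders = COST.select { |_type, c| c < BENEFIT }.keys
# => [:high] -- only high types signal, so the signal carries information.
# If the signal were cheap for everyone (both costs < 10), both types would
# send it and observers would learn nothing.
```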
@ -48,4 +48,4 @@ In your landscape, height corresponds to the *difficulty* of doing something. Yo
We would therefore expect the amount of struggle in your life to always remain as high as possible.
It never gets any easier. It *can't*. Easy things are worthless signals.

View File

@ -8,20 +8,22 @@ tags:
- guys i'm totally going with this doctor deontology thing
- thought experiment
techne: :done
episteme: :speculation
episteme: :fiction
slug: 2011/12/30/consent-of-the-dead/
---
<a href="http://theviewfromhell.blogspot.com/2011/01/pareto-kaldor-hicks-and-deserving.html">Sister Y observes</a>:
<blockquote>A market or social system may provide for individual choice in any given transaction, but a participant cannot decide <em>whether to be part of a market economy</em>. It's not <a href="http://en.wikipedia.org/wiki/Turtles_all_the_way_down">consent all the way down</a>, you might say.</blockquote>
[Sister Y observes][Sister Kaldor]:
> A market or social system may provide for individual choice in any given transaction, but a participant cannot decide *whether to be part of a market economy*. It's not [consent all the way down][Turtles], you might say.
The lack of consent is the strongest case for the immorality of bringing someone into existence, I think. Morality must be grounded in contracts (among other things, perhaps), and without consent you have no Rule of Law, only tyranny. It might be a super-happy tyranny of fun, though. Evil has its upsides.
Assuming the necessity of consent, can there ever be a moral way to bring someone into existence? Maybe. Consider this simple thought experiment.
<img class="aligncenter" src="http://upload.wikimedia.org/wikipedia/en/6/6d/Detective_Comics_818_2nd_print_coverart.jpg" alt="" width="250" height="206" />
<%= image("twoface.jpg", "Two-Face") %>
The evil Doctor Deontology is trying to assemble his crew of supervillains. He has recently gotten his hands on a cryogenically frozen Two-Face and now considers reviving him. He knows that Two-Face always makes his important decisions by flipping a coin. Fortunately for him, Doctor Deontology also obtained this coin, and, deontologist that he is, he wants Two-Face's consent before he goes through with the procedure.
So he reasons: "If Two-Face were already alive, he would simply flip this coin to answer my question. There is nothing special about *him* doing the flipping, so *I* can just flip the coin in his stead." So he does, the coin comes up heads, and Doctor Deontology revives Two-Face after all.
Did he do so with Two-Face's consent?

View File

@ -12,13 +12,13 @@ episteme: :speculation
slug: 2012/03/22/happiness-and-ends-vs-means/
---
Doctor Deontology has [previously](http://blog.muflax.com/2011/12/30/consent-of-the-dead/) shown that it might be possible to get a person's consent even when that person doesn't (currently) exist. Not satisfied by this victory, our villain returns to attack the idea of consent more directly.
Doctor Deontology has [previously][Consent of the Dead] shown that it might be possible to get a person's consent even when that person doesn't (currently) exist. Not satisfied by this victory, our villain returns to attack the idea of consent more directly.
<a href="http://dresdencodak.com/2009/01/27/advanced-dungeons-and-discourse/"><img src="http://blog.muflax.com/wp-content/uploads/2012/03/selection-2012-03-22174857.png" alt="" title="selection-2012-03-22[17:48:57]" width="429" height="253" class="aligncenter size-full wp-image-936" /></a>
<%= image("dark_kantian.png", "Dark Kantian", "http://dresdencodak.com/2009/01/27/advanced-dungeons-and-discourse/") %>
This time, Doctor Deontology has recruited [Evil Immanuel Kant](http://www.raikoth.net/Stuff/ddis/dsong_kant.html) from a [parallel universe](http://squid314.livejournal.com/306912.html). Together, and using old blueprints of [Nozick's Experience Machine](https://en.wikipedia.org/wiki/Experience_machine), they have build a true masterpiece of Evil Engineering - the Sudden Suffering Reversal Instrument, conveniently shaped like a laser gun.
This time, Doctor Deontology has recruited [Evil Immanuel Kant][Kant Song] from a [parallel universe][King in the Mountain]. Together, and using old blueprints of [Nozick's Experience Machine][Experience Machine], they have built a true masterpiece of Evil Engineering - the Sudden Suffering Reversal Instrument, conveniently shaped like a laser gun.
Anyone hit by the SSRI beam will be transmogrified into an [unbreakable](http://diabasis.com/2011/06/18/could-there-be-beings-that-are-not-wrong-to-make/) superbeing, incapable of feeling any suffering. This transformation does not otherwise negatively affect the person or their decision-making; they remain perfectly aware of any harm to themselves or others, and are capable of acting on that knowledge just like before. It merely replaces the *feeling* of suffering with that of pleasure, but without any addictive potential or other side-effect.
Anyone hit by the SSRI beam will be transmogrified into an [unbreakable][Unbreakable] superbeing, incapable of feeling any suffering. This transformation does not otherwise negatively affect the person or their decision-making; they remain perfectly aware of any harm to themselves or others, and are capable of acting on that knowledge just like before. It merely replaces the *feeling* of suffering with that of pleasure, but without any addictive potential or other side-effect.
Evil Immanuel Kant has introduced a catch, however, and that's the reason the SSRI has "sudden" in its name - it will *only* work if the recipient *doesn't* agree to be shot. You may sneak up on them and shoot them in the back, or lie and tell them it's an anti-cancer gun, or use it in any other way, as long as your target has *not* given their consent to what you're about to do.
@ -28,4 +28,4 @@ Armed thus, Doctor Deontology and Evil Immanuel Kant confront ethicists with the
Good Immanuel Kant argues that the problem with the SSRI is that it treats people as *means* to happiness, not as ends in themselves. Doctor Deontology cannot claim to act in the interests of his victims because they clearly disagree with the treatment. It is necessary for him to lie to the victim or otherwise mistreat them to make them happy; they can't freely choose to accept it. This puts the happiness before the person, turning a human into merely a vessel for a sensation, not an agent worthy of dignity and respect. On this view, we do not actually care about others; we really only have a vendetta against suffering itself, and therefore we value the eradication of suffering more than the autonomy of persons. Good Immanuel Kant concludes that morality must first be concerned with people, and any action that treats them as means to an end is wrong. Thus, the SSRI is evil.
Is Good Immanuel Kant right?

View File

@ -7,7 +7,7 @@ tags:
- doctor deontology is a totally awesome villain
- thought experiment
techne: :done
episteme: :speculation
episteme: :fiction
slug: 2011/12/30/on-benatars-asymmetry/
---
@ -21,9 +21,9 @@ David Benatar uses the following asymmetry in his arguments for antinatalism:
4. The absence of benefit is not bad unless there is somebody for whom this absence is a deprivation.
I'm increasingly skeptical of this asymmetry. Here's a thought experiment to illustrate why. And don't worry, it doesn't involve any torture, rape or murder! What am I, an ethicist?[1] It's only about pie.
I'm increasingly skeptical of this asymmetry. Here's a thought experiment to illustrate why. And don't worry, it doesn't involve any torture, rape or murder! What am I, an ethicist?[^1] It's only about pie.
<img class="aligncenter" src="http://images.wikia.com/pushingdaisies/images/6/6f/The_Pie_Hole_at_Day2.jpg" alt="" width="400" height="225" />
<%= image("pie_hole.jpg", "Pie Hole") %>
There are three different worlds. Let's call them *Defaultia*, *Absencia* and *Lossa*. They are all very similar, except for one little detail. In all three worlds there is a pie shop, and in this pie shop there is a careful pie maker. The pie maker is currently in the process of making another delicious pie for a customer. Behind the pie maker are three ingredients in three conspicuously similar pots, yet only one is needed for the pie. The pie maker will blindly grab one of the pots, make sure it is the right one and if so, use it. The pie will be delicious and the customer will be very happy.
@ -35,21 +35,30 @@ In *Absencia*, the pie maker is not so lucky and takes the wrong ingredient at f
And finally in *Lossa*, the pie maker again picks the wrong pot. (What's up with that anyway? Maybe the pie maker should consider looking next time! Sheesh.) This time, though, it is not the pie-ruining ingredient - unbeknownst to the pie maker, it would have made the pie even *more* delicious! It is a totally weird coincidence and no one in the whole world knows of this connection, so the pie maker again puts back the pot and picks the intended ingredient. As usual, the same pie as in Defaultia results. Sunshine, end scene.
<img class="aligncenter" src="http://imgc.allpostersimages.com/images/P-473-488-90/17/1723/GS53D00Z/posters/philip-enticknap-sonnenblumenfeld-umbrien.jpg" alt="" width="473" height="354" />
<%= image("philip-enticknap-sonnenblumenfeld-umbrien.jpg", "Sonnenblumen") %>
Thus ends the thought experiment. And here is the question: which of these worlds is *better*? Remember that in all three of them, the exact same pie is produced, and both pie maker and customer are just as happy every time.
Yet if we believed the asymmetry, then there would be a clear winner - namely *Absencia*! In Absencia, there was a potential for great harm. Had the pie maker not noticed the wrong pot, then the customer's day would've been ruined. But fortunately, this harm was avoided and so, says the asymmetry, an additional good was produced for the customer. Ergo, Absencia is the best.
There is a certain position (typically brought forth by transhumanists) that rejects the asymmetry in an unusual way. It's closely related to what Nick Bostrom calls <a href="http://www.nickbostrom.com/astronomical/waste.html">Astronomical Waste</a>. In his words:
<blockquote><span style="font-family: Times New Roman,Times,serif;"><span style="font-family: Times New Roman,Times,serif;">With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe.</span> For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized.</span></blockquote>
There is a certain position (typically brought forth by transhumanists) that rejects the asymmetry in an unusual way. It's closely related to what Nick Bostrom calls [Astronomical Waste][]. In his words:
> With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized.
So this position says that the absence of benefits, even when there is no existing person being deprived, is still bad. Proponents of this view look at the universe and are disappointed by all the matter that *isn't* used for making people happy (or making happy people). It follows, then, that if the absence of pleasure causes harm, *Lossa* is clearly worse than Defaultia! After all, Lossa almost included a super-pie and a super-happy customer, but then didn't.
In a third approach, we could ask Hardcore Consequentialist Robot 9000 what it thinks about these worlds. It would correctly reason that the pie maker's initial choice of ingredients was truly random and that the resulting pie was already determined before picking anything. The pie maker will always end up using the intended ingredient and the same pie will be made. Thus, the state of the world is always the same, and as paths to a state don't matter to HCR 9000, all worlds are exactly equal in value. (This scenario is particularly frustrating for HCR 9000's evil archenemy Doctor Deontology. Paths matter, he says, but only random chance was involved this time, so he still has to choose. But how?)
So who's right? Or is everyone wrong and there's a fourth option?
<em>\[1\]: As <a href="http://lesswrong.com/lw/5ro/what_bothers_you_about_less_wrong/47ph">PlaidX observes</a>:</em>
<blockquote><em>The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with "getting tough" on cognitive algorithms. Grinding an axe or just wallowing in self-destructive paranoia.</em>
[^1]: As [PlaidX observes][PlaidX torture]:
> The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with "getting tough" on cognitive algorithms. Grinding an axe or just wallowing in self-destructive paranoia.
>
> If the point you're making really only applies to torture, fine. But otherwise, it tends to read like "Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE"
<em>If the point you're making really only applies to torture, fine. But otherwise, it tends to read like "Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE"</em></blockquote>

View File

@ -7,24 +7,24 @@ tags:
- suicide
- thought experiment
techne: :done
episteme: :speculation
episteme: :fiction
slug: 2012/03/21/suicide-and-preventing-grief/
---
In an underground bunker, deep under the Cartesian Plains, trapped in a cage, sits the great [Mahavira](https://en.wikipedia.org/wiki/Mahavira). He is the latest victim of [Doctor Deontology](http://blog.muflax.com/tag/doctor-deontology/), once again on a mission to spread chaos amongst all who vow to protect morality.
In an underground bunker, deep under the Cartesian Plains, trapped in a cage, sits the great [Mahavira][]. He is the latest victim of [Doctor Deontology][tag doctor deontology], once again on a mission to spread chaos amongst all who vow to protect morality.
Mahavira defends the duty that one should [never do harm](https://en.wikipedia.org/wiki/Ahimsa). It is never acceptable to be violent or knowingly cause others to suffer. Doctor Deontology, mad ethicist that he is, wants to test this notion.
Mahavira defends the duty that one should [never do harm][Ahimsa]. It is never acceptable to be violent or knowingly cause others to suffer. Doctor Deontology, mad ethicist that he is, wants to test this notion.
Doctor Deontology presents Mahavira with a choice. In his cage are two buttons. The first button will release a deadly neurotoxin that will kill Mahavira instantly and without pain. The second button will flood the cage with a [nutrient-rich sludge](http://www.penny-arcade.com/comic/2010/1/25/) that can be absorbed through the skin and will nurture and heal anyone immersed in it for another day, but will also, as an unfortunate side-effect, cause them tremendous pain.
Doctor Deontology presents Mahavira with a choice. In his cage are two buttons. The first button will release a deadly neurotoxin that will kill Mahavira instantly and without pain. The second button will flood the cage with a [nutrient-rich sludge][] that can be absorbed through the skin and will nurture and heal anyone immersed in it for another day, but will also, as an unfortunate side-effect, cause them tremendous pain.
If Mahavira presses the second button every day, he would be able to live indefinitely, but also be in constant, unbearable pain. He can escape this fate at any time by pressing the first button and so kill himself. So far, Mahavira agrees that Doctor Deontology has not done anything evil, and announces that he intends to press the suicide button soon.
But the ascetic has underestimated the mad ethicist. Behold Doctor Deontology's latest creation - the Grief Monster!
<a href="http://blog.muflax.com/wp-content/uploads/2012/03/grief.jpg"><img src="http://blog.muflax.com/wp-content/uploads/2012/03/grief.jpg" alt="" title="grief" width="225" height="225" class="aligncenter size-full wp-image-923" /></a>
<%= image("grief.jpg", "Grief Monster") %>
Engineered from the DNA of a wild [Utility Monster](https://en.wikipedia.org/wiki/Utility_monster), the Grief Monster is perfectly content maintaining Doctor Deontology's underground bunker, but should it learn that Mahavira has died under anything but natural circumstances, it will feel the most horrible grief imaginable and suffer greatly as a consequence.
Engineered from the DNA of a wild [Utility Monster][], the Grief Monster is perfectly content maintaining Doctor Deontology's underground bunker, but should it learn that Mahavira has died under anything but natural circumstances, it will feel the most horrible grief imaginable and suffer greatly as a consequence.
Thus, should Mahavira commit suicide, the Grief Monster would suffer immensely. But if he does not - if he chooses the food - then *he* will suffer indefinitely.
Has Doctor Deontology succeeded in creating a situation in which the only moral option is to freely accept inevitable and unlimited suffering? What if the Grief Monster had come into existence by accident, or through a blind selection process?

View File

@ -8,7 +8,7 @@ slug: 2012/03/09/dammit-hardison/
Let's get this rolling.
I've set up the daily log, and prompted by XiXiDu, published [some crazy shit](http://blog.muflax.com/2012/03/08/ontological-therapy/). I should have published it much earlier, but I was still hoping I could untangle the ontology problem from the inside, by finding a sane approach to hypothetical / potential / actual worlds. I might still do that, but the emotional part of the process can stand on its own. (I also had some interesting exchanges with XiXiDu and other commenters, so that took up some of my time as well. Figures that the craziest shit I write is the most popular.)
I've set up the daily log, and prompted by XiXiDu, published [some crazy shit][Ontological Therapy]. I should have published it much earlier, but I was still hoping I could untangle the ontology problem from the inside, by finding a sane approach to hypothetical / potential / actual worlds. I might still do that, but the emotional part of the process can stand on its own. (I also had some interesting exchanges with XiXiDu and other commenters, so that took up some of my time as well. Figures that the craziest shit I write is the most popular.)
I also finished my Anki backlog. I reviewed about 300 French cards I had temporarily suspended when I got a little sick of French. I should really switch to the Anki 2 beta (it supports multiple decks in a sane way), but then I'd have to convert my plugins and... yeah. Anyway, reviews done, [good job][Good Job].

View File

@ -12,7 +12,7 @@ I realized that even though I'm taking a vacation from LessWrong (unsuccessfully
For fuck's sake, I'm playing Diablo 2 as the *less addictive alternative*.
This isn't just memetic exploitation. This is brain slug territory. I feel like I've been plugged into the great thought machine recently and everything is converging towards making sense, but it's all [going too damn fast](http://blog.muflax.com/2012/01/03/how-my-brain-broke/). I'm getting amazingly close to some demons, but if I don't [pay attention][Book of the Dead], I'll once again miss my chance to slay them.
This isn't just memetic exploitation. This is brain slug territory. I feel like I've been plugged into the great thought machine recently and everything is converging towards making sense, but it's all [going too damn fast][How My Brain Broke]. I'm getting amazingly close to some demons, but if I don't [pay attention][Book of the Dead], I'll once again miss my chance to slay them.
I need some time to *think*.

View File

@ -16,7 +16,7 @@ So I tried to get into jhana to get rid of all the anxiety in my head. Did my fi
10min jhana failed to establish anything. My head isn't free, all entangled. I tried 5min shikantaza, but still only drift. Only the all-encompassing Dukkha [Core][Logic Core].
(I do have several drafts about this, and [old attempts to approach it](http://blog.muflax.com/2012/01/30/morality-for-the-damned-first-steps/), and (maybe, in the archive) even the writing from the time I *created* the Dukkha Core, but this dlog is not about ideas, only practice. Eventually I will write this up, and I can then point to this entry here as the day I destroyed the Dukkha Core. Until then, it may unfortunately be hard to follow what I'm doing.)
(I do have several drafts about this, and [old attempts to approach it][Morality for the Damned], and (maybe, in the archive) even the writing from the time I *created* the Dukkha Core, but this dlog is not about ideas, only practice. Eventually I will write this up, and I can then point to this entry here as the day I destroyed the Dukkha Core. Until then, it may unfortunately be hard to follow what I'm doing.)
I need a larger caliber, something I haven't done in *years*. It's very dangerous and utterly uncontrollable, but I need drastic measures. Anything else will just repeat the pattern I'm currently going through, week after week, month after month.
@ -24,7 +24,7 @@ Time to return to the Tao.
Keeping the backstory short, I deliberately removed myself from the Tao a few years ago, thinking it sinful. I have tried to return multiple times, but never could as long as I still rejected it. Yesterday (prompted by a friend), I took my meta-morality approach and actually applied it, looked at its implications, expecting it to tell me that the world is inherently and unfixably screwed. I've been living under this assumption after all, and considered it the main reason for my [suicidality][Sister Epilogue].
To my complete surprise, the approach *fixed everything*. I mean *everything*. The [anxiety](http://blog.muflax.com/2012/03/08/ontological-therapy/) is gone. The world is not wrong. God is not evil. The Problem of Evil is either necessary, or has a truly ingenious solution that I'm not sure even God can pull off (and it would make Him a fantastic troll, and would probably allow you to exploit anthropic information to *counter-troll God*), but regardless, bringing about the Problem of Evil is not *itself* evil. Modal realism does not imply a broken multiverse. Meta-morality is sufficiently grounded and doesn't suffer from a regress problem. (At least not in a form I care about.) Moral nihilism is false. (In a very interesting way.) One True Morality exists. (In an even more interesting way. There might be multiple One True Moralities though. (It makes sense in context.)) Worrying about an [unsupervised][unsupervised universe]) world is asking a wrong question; supervision has no effect on goodness.
To my complete surprise, the approach *fixed everything*. I mean *everything*. The [anxiety][Ontological Therapy] is gone. The world is not wrong. God is not evil. The Problem of Evil is either necessary, or has a truly ingenious solution that I'm not sure even God can pull off (and it would make Him a fantastic troll, and would probably allow you to exploit anthropic information to *counter-troll God*), but regardless, bringing about the Problem of Evil is not *itself* evil. Modal realism does not imply a broken multiverse. Meta-morality is sufficiently grounded and doesn't suffer from a regress problem. (At least not in a form I care about.) Moral nihilism is false. (In a very interesting way.) One True Morality exists. (In an even more interesting way. There might be multiple One True Moralities though. (It makes sense in context.)) Worrying about an [unsupervised][unsupervised universe] world is asking a wrong question; supervision has no effect on goodness.
(These are just the conclusions. Will write about the actual arguments hopefully soon, though I'm unsure I actually should because I fear there might be a [moral basilisk][Missionary Paradox] around.)

View File

@ -22,7 +22,7 @@ So I decided to simply get in touch with my inner Wu Wei by not doing anything w
So I'll just do other stuff until life gets back to me about what I'm supposed to do.
Today is big bug-fixing day. Colleague found a nasty bug and I have to re-structure a lot of code. Plus, some early shortcuts I took don't work anymore and I have to implement even more of the SystemC standard, and I learned today that we finally renewed our licenses and I can <del datetime="2012-03-28T23:36:42+00:00">get fucked in the ass by</del> work with micro-processors and <del datetime="2012-03-28T23:36:42+00:00">stone-age compilers</del> old-school tools again. Shoulda become a Rails dev.
Today is big bug-fixing day. Colleague found a nasty bug and I have to re-structure a lot of code. Plus, some early shortcuts I took don't work anymore and I have to implement even more of the SystemC standard, and I learned today that we finally renewed our licenses and I can <del>get fucked in the ass by</del> work with micro-processors and <del>stone-age compilers</del> old-school tools again. Shoulda become a Rails dev.
I worked more on porting the Wordpress blogs to nanoc+disqus so I can switch entirely to a static website. My current host (a friend) will kick me out next month (j/k, wanted to switch anyway), so I really have to transition this shit. If the site is occasionally gone, don't worry, just temporary downtime 'cause I broke something.

View File

@ -42,7 +42,7 @@ Meta-moral issues can't contribute anymore to my anxiety, so why is it still the
Standard complication - can't go near it without the anxiety hijacking everything, preventing any work at all, thus increasing the problem.
Fortunately, I now have a Happy Place to work with. (Making metaphysics pay rent!) I noted that [Catholics](blog.muflax.com/2012/03/14/catholics-right-again-news-at-11/), like Stoics, advocate using Saints etc. to guide you in your own exploration. (Robert M. Price speculates that this Stoic practice may be why early Gnostics invented Christ.) Someone else's problems are always easier to solve than your own, so it's time for some depersonalization! (Making psychosis pay rent!)
Fortunately, I now have a Happy Place to work with. (Making metaphysics pay rent!) I noted that [Catholics][Catholics Right Again, News at 11], like Stoics, advocate using Saints etc. to guide you in your own exploration. (Robert M. Price speculates that this Stoic practice may be why early Gnostics invented Christ.) Someone else's problems are always easier to solve than your own, so it's time for some depersonalization! (Making psychosis pay rent!)
(Stylized excerpt of inner dialog to allow later reconstruction.)

View File

@ -13,7 +13,7 @@ So, the real point. Persinger once hypothesized that small fluctuations in the E
Regardless of whether that is true, there's a more reliable source of magnetic disturbance available - the Sun. Its magnetic field fluctuates quite a bit. You can find the current data on [NOAA's site][NOAA]. The bottom-middle diagram shows the current Kp index, which is just a simple classification of how rapidly the field is changing right now. <4 means the field is quiet, 4-6 is a normal storm, >6 is huge. Normal storms are enough to disturb international radio transmissions; huge ones might even fry unprepared electronics in orbit. A storm typically lasts about half a day.
So, like any self-respecting empiricist, I decided to test the hypothesis. If Persinger is right, then an index &gt;4 should be enough to trigger a noticeable change in the temporal lobe. My brain is sensitive enough to go in full-on religious experience mode when probed and I strongly suspect that I have mild to normal temporal lobe epilepsy, so I'm the perfect test subject. If *I*> don't notice anything, then it must be bullshit.
So, like any self-respecting empiricist, I decided to test the hypothesis. If Persinger is right, then an index >4 should be enough to trigger a noticeable change in the temporal lobe. My brain is sensitive enough to go into full-on religious-experience mode when probed, and I strongly suspect that I have mild to normal temporal lobe epilepsy, so I'm the perfect test subject. If *I* don't notice anything, then it must be bullshit.
Of course, I can't just look up the current value and ask myself, "Am I more spiritual today than usual?". Confirmation bias, self-fulfilling prophecies and nastier stuff would wreck my results. So I just subscribed to the official NOAA mailing list and archived the Kp index. I also took an automatic screenshot every 10 minutes so I could later reconstruct what I did that day. I then let two months pass (May and June) without reading any of that mail.
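A rough Ruby sketch of that logging setup (hypothetical - the post doesn't show any tooling; `scrot` is just one screenshot CLI that would do the job):

```ruby
# Hypothetical reconstruction of the blinding setup: archive the screen every
# 10 minutes, and never read the Kp mails until the trial period is over.
require 'fileutils'

log_dir = File.expand_path("~/kp_experiment/screens")
FileUtils.mkdir_p(log_dir)

loop do
  stamp = Time.now.strftime("%Y-%m-%d_%H-%M")
  # assumes the `scrot` screenshot tool is installed
  system("scrot", File.join(log_dir, "#{stamp}.png"))
  sleep 600 # 10 minutes
end
```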

View File

@ -1,6 +1,6 @@
---
title: Three Sides
alt_titles: [Stances]
alt_titles: [Stances, Dark Stance]
date: 2011-07-18
techne: :done
episteme: :broken

View File

@ -1,10 +1,15 @@
require 'image_size'
def image(name, title="")
def image(name, title="", link=nil)
# all images are stored at content/pigs and only the main site routes them
img = ImageSize.new IO.read("content/pigs/#{name}")
"<img src='/pigs/#{name}' height='#{img.height}' width='#{img.width}' title='#{title}' alt='#{title}'/>"
ret = ""
ret += "<a href='#{link}'>" unless link.nil?
ret += "<img src='/pigs/#{name}' height='#{img.height}' width='#{img.width}' title='#{title}' alt='#{title}'/>"
ret += "</a>" unless link.nil?
ret
end
def youtube(url)
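For reference, a sketch of what the extended helper returns (assuming it's loaded and using the 225×225 `grief.jpg` from above; the example.com link is a placeholder):

```ruby
image("grief.jpg", "Grief Monster")
# => "<img src='/pigs/grief.jpg' height='225' width='225' title='Grief Monster' alt='Grief Monster'/>"

image("grief.jpg", "Grief Monster", "http://example.com/")
# wraps the same <img> tag in "<a href='http://example.com/'>...</a>"
```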

View File

@ -1,9 +1,9 @@
def main_site?
true
$site == "muflax"
end
def blog?
false
$site =~ /^(blog|daily)$/
end
def sites
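Presumably `$site` is set once per compiled site; a hypothetical sketch of the idea (only `$site` and the two predicates are confirmed by the diff - the driver loop is my guess):

```ruby
# Hypothetical driver -- not in the diff; only $site itself appears there.
%w[muflax blog daily].each do |name|
  $site = name
  puts "#{name}: main site" if main_site?
  puts "#{name}: blog-like" if blog?
end
```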