Tuesday, 8 November 2016

"Be Yourself" Is Crap Advice

A lot of the self-esteem culture my generation faced felt like cognitive-behavioural therapy doled out by adults who had no training in how to sensitively administer a therapy program to anyone in particular, let alone children. So it sucked, and was at best disconcerting and confusing. At least for me. Anyway, telling kids to be themselves is a substanceless and noisy truism which shames kids for engaging in what seems to me a well-adapted, strategic, unconscious, evolved behaviour: trying to fit in. And for the kids who find it difficult to fit in, "just be yourself" is transparent circular reasoning. The real answer is adults don't have heuristics for fitting in tailored to the specialness of any one person, let alone a child. Be honest with children. When the disciplinary action of a teacher, parent, or other authority figure fails to stop signaling games among children, it's better for the adult to answer a child's question of "how do I become less awkward?" with "I'm not sure. Tell me more." From there, listen to the child's messy reality, see it through their eyes, and figure out a plan together.

I've been told to be myself dozens of times, as if beforehand I was the most awkward kid in my class only because I was wearing a skinsuit made from another child and pretending to be someone else. "Be yourself" is the hollow turd of truisms.

Sunday, 23 October 2016

Any Good Arguments For A Particular God?

I was watching this video from Crash Course Philosophy today.


After watching it, I realized these theodicies and other solutions to problems posed by the existence of a god are sufficient to convince me there could be a creator of the universe. By that, I mean philosophy alone is sufficient to convince me I cannot logically convince myself there definitely isn't a force or agent external to the universe which created it. Of course, I take the possibilities of the simulation argument seriously as well. I don't want to call such a creator a 'god', because gods always have anthropic interests: Earth, humanity, and our fate always figure in the concerns of gods. Unless we have special reason to believe such a being cares about anything in particular in a simulation, rather than just running a simulation in general, that simulator isn't anything like a god. I also don't want to call it a god for pragmatic reasons; I'm afraid, for example, that if we somehow discovered we really are living in a simulation and deigned to call the simulator a god, theists of all stripes would come out of the woodwork to claim and rationalize it as proof of their god.

Of course, that I don't think I can reject the existence of all and any gods on principle alone doesn't mean I don't reject the existence of any god in practice. Science and history are sufficient to convince me no particular god of any religion exists, and no religion is true, where philosophy alone fails. I was thinking of something, though: are there good arguments for the existence of any particular god that rule out the existence of all other gods (or particular interpretations of any one god)? For example, theodicies resolving the Problem of Evil seem to me to apply not just to the Christian interpretation of the one God ("Jehovah"), but to the Jewish and Islamic interpretations as well ("YHWH" and "Allah", respectively). Are there any arguments you've heard, that you couldn't knock down, which favour the existence not of just some god, any god, but a particular god?

Tuesday, 27 September 2016

Causes For Raising the Rationality Waterline?

Summary: It appears 'rationality', more than just a narrow cause, could be a focus area as diverse and broad as others like 'global poverty' or 'global catastrophic risks'. I discuss how it should be possible to better evaluate the impact of these efforts the way Givewell and other parties in the effective altruism movement assess interventions in other causes. Note I draw heavily on effective altruism for examples because of my familiarity with them, but 'increasing rationality' as a cause could work for all kinds of motivations and movements. Finally, I broadly summarize ongoing and potential efforts to improve thinking in society which fit into this cause.

Effective altruism (EA) has introduced lots of people to the 'important/neglected/tractable' framework (INT framework) for evaluating causes, a heuristic for relatively quickly ruling in or out whole cause areas for doing good. Typically, this framework will be applied in a 'shallow review' of a whole cause before a 'deep dive' is done to learn a whole lot more about it, who the actors are, and who's doing the best work. This is necessary for EA because resources like person-hours available for analyzing causes are limited. We can't do deep dives to investigate each and every cause before we act, because that'd take so long we'd never act at all.

Anyway, I sense this INT framework could be useful in domains outside effective altruism as well. In particular, the EA-adjacent rationality community has become sizable enough that it's not just raising the sanity waterline for individuals, like people reading LessWrong alone, or even for groups, like whatever lifestyle experiments happen in the Bay Area and other rationalist enclaves, but for society in general. We see this with the Center For Applied Rationality (CFAR).

EA doesn't evaluate merely whole organizations, but individual projects being pursued by those organizations. For example, one of Givewell's top recommendations is the Deworm the World Initiative (DtWI), which is not its own organization, but rather a single program run by Evidence Action, which runs other programs. These other programs aren't among Givewell's top recommendations. This isn't to say they shouldn't happen, but they're not as impactful and/or neglected as DtWI, and the marginal dollars of effective altruists and other donors are better suited to DtWI.

We see this with current efforts at CFAR as well. The Open Philanthropy Project (Open Phil) recently made a grant to CFAR specifically for their SPARC program, rather than for their general operating funds (Edit: since the original draft of this post, Open Phil has made an additional grant to CFAR for general operations). It could be said CFAR is currently running three different programs*:

  • SPARC, an in-depth summer camp targeted at some of the top maths secondary school students in the United States, and occasionally from abroad.
  • CFAR Workshops, CFAR's traditional and longest-lasting work, which teaches rationality techniques to individuals. Whole businesses and organizations can sign up for a specialized rationality workshop aimed at improving individual and whole-team performance in the workplace. I don't know much about this, so perhaps it could be classed as a different type of program than regular CFAR workshops.
  • 'CFAR Labs', full-time research work done by some CFAR staff to apply cognitive and behavioural sciences, and trial combining them with other knowledge bases, to discover or create new rationality techniques. These may in the future improve SPARC, typical CFAR workshops, or some new project.
Other rationality 'causes', 'interventions', or whatever we end up calling them, (could) include:

  1. Fixing/improving science: a theme on LessWrong has been that the scientific method as practiced could be improved to create tighter feedback loops in advancing knowledge, such as making newer statistical methods standard practice in science, especially in medicine and the psychological sciences. With the replication crisis throwing into question decades' worth of research, and what the public thought was scientific progress, this issue seems more important than ever. Science might no longer merely be improved; it really does need to be 'fixed'. While neglecting this issue may not lower the absolute level of the sanity waterline, the compounding effects of uncorrected false conclusions, and of little or no confidence in contemporary research methods to answer important questions, will eventually slow or reverse progress in raising it. This cause seems to me very important, and I'd guess relatively neglected. I don't know how tractable it is.
  2. Spreading skepticism

    2a. Spreading scientific skepticism and literacy: people like James Randi have spent decades challenging charlatans and spreading attendant memes on debunking claims of psychic powers and the like. They've overlapped with scientists who've debunked pseudoscience, fake methods parading as 'scientific' rather than claiming to be magic, such as climate-change denial, medical quackery, conspiracy theories, and classic nonsense like astrology. I'd expect the biggest gains so far, in terms of expected value, to be from the debunking of useless or actively harmful alternative medicine, or making important policy like climate change mitigation more likely to be legislated and implemented. I'd guess this is important, generally crowded but neglected in some spaces, and somewhat tractable. Of course, we could assess this better.

    2b. Spreading religious skepticism: religion is a source of irrationality. Since the rise of New Atheism in the post-9/11 world, the work of people like Richard Dawkins, Sam Harris, the late Christopher Hitchens, Michael Shermer (who also spreads skepticism on topics besides religion), and Daniel Dennett has reinvigorated atheism in a way that makes it much more like a typical social movement than it has been in the past. The importance of this cause could be better quantified by trying to evaluate the types and magnitude of harms caused by religion. My guess is spreading religious skepticism isn't neglected, but perhaps much more could be done, as its methods seem far from optimized.

    2c. Spreading political skepticism/rationality: This section got very long, and I didn't want to interrupt the topic here, so I've published my thoughts in a separate blog post.

  3. Education reform: it was brought to my attention this is a focus area for rationality as well. Arguably, it's by far the most important cause: everything else here is really a type of education, so reforming education right at the source, in primary and secondary schools, would be the most scalable. However, at least in the United States, this area is incredibly crowded. Really, the space is so crowded it's several times bigger than everything else mentioned here. I recall several years ago Givewell also spent much time investigating education reform, and concluded it wasn't very tractable.

    "Improving education" is commonly about increasing test scores, as championed by Bill Gates. The advantage of the other causes mentioned above is that they reach a section of the public which more-or-less understands why and how rationality isn't just increased intelligence, let alone a proxy for intelligence in adolescents like test scores. "Critical thinking" divorced from test scores is also crowded in education reform. I'd predict significant challenges for the rationality community in communicating how they differ from all other education reform efforts on a mass scale. So, school reform on a mass scale seems a no-go, at least for the foreseeable future. Trialing successful rationality programs like SPARC, then scaling them to various schools as a special curriculum within the current paradigm, eventually becoming something like the International Baccalaureate program, while still challenging, seems more neglected and tractable. These would be smaller-scale interventions predicated on saner students going on in their lives and careers to improve the world and raise the rationality waterline further.

  4. Media projects: this isn't really a cause in its own right so much as a consideration cutting across all causes. What I mean is spreading rationality is done through one or more popular media, such as books, documentary TV and movies, online articles and videos, ads, or any number of podcasts like Julia Galef's Rationally Speaking. The first step in evaluating the effectiveness of any media project is assessing its level of exposure or traffic. Beyond that, we need to assess how much traction per reader or audience member the intended messages had. This may seem difficult. However, it has been done in effective altruism multiple times. A/B testing of websites is a common way to assess effectiveness online. The Centre for Effective Altruism assessed their marketing efforts for Will MacAskill's book Doing Good Better. Also, Animal Charity Evaluators has assessed the impact of several types of interventions for spreading anti-speciesist attitudes and vegan/vegetarian diets, including documentaries and online videos, online ads, and pamphleting. The same could be done for all manner of messaging and media employed by rationalists and skeptics, and the tools to do so, while imperfect, can readily be borrowed from the behavioural sciences and business.
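To give a sense of how lightweight the A/B testing mentioned above can be: the standard tool is a two-proportion z-test comparing conversion rates between two variants of a page or ad. This is a minimal sketch in plain Python; the function name and the sign-up counts are made up for illustration.

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test: do variants A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two samples to estimate the conversion rate under the
    # null hypothesis that the variants are identical.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-tailed p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic: variant A got 120/2000 sign-ups, variant B 165/2000.
z, p = two_proportion_ztest(120, 2000, 165, 2000)  # z ≈ 2.77, p ≈ 0.006
```

With numbers like these the difference is significant at p < 0.01, which is the kind of quick check a skeptic or rationalist organization could run on its own outreach pages without any specialized infrastructure.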

    Considering the emphasis the rationality community places on checking the impact of personal efforts and on the importance of feedback loops, and the closeness of other movements to it, I'm surprised I haven't heard of organizations run by folks like Sam Harris, James Randi, or Richard Dawkins assessing the effectiveness of their efforts. I should note most of the studies from EA mentioned above found it difficult to draw strong conclusions, owing to the difficulty of assessing public media. However, unless I'm mistaken, this seems a relatively important and highly neglected cause for raising the sanity waterline.

Conclusion

This is what comes to my mind as the major domains for raising the rationality waterline. Another cause brought to my attention was "improved decision-making and forecasting". These efforts have only risen to prominence in the last few years, so I didn't know where to find summary info on the whole field. It seems perhaps the most important, neglected, and tractable cause in this focus area, and any treatment of it deserves a blog post in its own right. Also, like 'media projects', decision-making and forecasting are more like cross-cutting methods applicable to object-level domains.

Decreasing the mental health burden, IQ-increasing interventions in the developing world like correcting nutritional deficiencies, and behavioural modification like better sleep or eating habits among the general public would also raise the sanity waterline. However, they also generally increase intelligence, health, and quality of life, so they're generic interventions better suited to major foundations ready to pour millions into funding them. Directly raising the rationality waterline, however it's later operationalized and measured, seems more doable for the rationality community itself, as laid out in the above examples.

My goal here hasn't been to launch any projects, but to start a conversation about which broad areas we'd look to first if we wanted to assess the best ways to improve rationality.

*There are no hard divides in CFAR's work outside of SPARC, so it may not be practical to think of CFAR's workshops as one or more "programs". This division is given as an example of how someone could arrange an organization's work to evaluate it as Givewell and other charity evaluators do.

Wednesday, 7 September 2016

Increasing Rationality In Political Thinking

Increasing rationality in political thinking: bad thinking about politics is a source of irrationality as well. At this point in history, while it may be less directly harmful than religion, its total indirect effects are arguably more consequential, due to the unprecedented influence the federal governments of large nation-states have on the lives of billions of people, affecting most aspects of their well-being. The importance and neglectedness of this cause will be more obvious to the LessWrong crowd than to other rationalists and skeptics.

There were times in history when this was much more important than it is now. Raising the rationality waterline in the 1930s and 1940s, or at least having the knowledge of the world we have now, could've prevented WWII. It could've prevented the public from following ideologies and parties that gave rise to catastrophic regimes. This would've been incredibly important in Russia in the 1910s, China and East Asia from the 1930s through the 1960s, much of Europe during the Cold War, and South America, the Middle East, and Africa during the post-colonial era. The Cold War could've been mitigated or ended much earlier if hatred and mistrust had been replaced with more attempts to coordinate and cooperate between the United States/NATO and the USSR/Warsaw Pact. Mass paranoia, excessive mistrust and fear, and irrationality not only in communist countries, but also the extremity of anti-communist sentiment in the West, such as the Red Scare, made the world more dangerous for everyone than at any point in history. It's difficult to imagine how radically different and better the world could've been if the sanity waterline had been higher at any of these periods of the twentieth century. Personally, I expect this cause may be the most important one in this focus area, as irrational thinking about global catastrophic risks like climate change, Artificial Intelligence, and technology arms races could lead to human extinction.

Another reason to work on this area is simply that we're again seeing increasing political division and rancor among the public, which could lead to dire consequences like those we saw in the 20th century. Economic illiteracy among the public gives credence to counterproductive policy, and elitist misunderstanding of social science can lead to awful policy or economic recessions. The international sectarian conflict currently dominating headlines, such as in the Middle East, is a result of both religious and political irrationality. Making this cause tractable, however, seems difficult. First of all, while the skeptic and rationality movements have a consensus on what science we can and should use to spread scientific and religious skepticism, there is much less consensus on what from science, especially the social sciences, we can use to increase rationality about political thought. Additionally, opinions on what constitutes 'good thinking' in politics are more divided and less scientifically informed, merely because much of the science which would lead to consensus doesn't exist. This can make it very difficult to coordinate groups to pursue a single set of best practices or strategies.

Thursday, 1 September 2016

Update on Upcoming EA Podcast: Format Choices

I've been thinking about the sort of EA podcast I want to produce. Firstly, it might include recording the EA newsletter, but I hope for it to be more in-depth. That is, if reading the newsletter is like watching the local evening news, the podcast I have in mind is like a segment on 60 Minutes. It wouldn't be an audio equivalent of investigative journalism, though; I don't intend to use the podcast to broadcast original research inspired by the podcast.

I myself sometimes write blog posts on the EA Forum. So, other work I do in EA wouldn't be mentioned on the podcast unless the community deems it newsworthy. I'd infer this from a project receiving something like a couple dozen votes on the EA Forum, being shared in the EA Newsletter, or somesuch.

Secondly, the intended audience of the podcast is people who are already familiar with effective altruism. Unlike Peter Singer's TED talk, TEDx or other talks by EA community leaders on YouTube, or the sort of radio interviews Will MacAskill has done in the past, this podcast will assume you've spent some time engaging with the foundations of effective altruism. The objective will be to inform people of what's going on as the scope of activity under the umbrella of effective altruism expands.

I want to get started without having to learn how to do much editing. That would take a while, and I don't want to wait until everyone has lost excitement for the project before getting started. So, I'm going to write scripts for the episodes on the EA policy work I'm covering. Being able to rehearse and read from scripts should reduce how much I stutter or pause during the podcast; I've heard this can be a problem in podcasts. I hope to start tomorrow.

My original idea was to record a podcast summarizing the policy work being done in and around the EA community. I thought something like this might take around an hour to listen to. However, I underestimated the amount of EA policy work being done. I think there is content on this topic alone for a couple hours worth of podcast. That's a bit long to start off with, so I'll be breaking down my summary coverage of EA policy work into multiple episodes. I'm breaking it down by the policy work done on respective object-level causes (e.g., poverty alleviation, animal welfare, etc.).

I imagine each of those episodes will be around 20 minutes in length. I don't imagine an episode will be less than 10 minutes or more than 40 minutes, for this first series of episodes.

Wednesday, 6 July 2016

Friends of DEAM

With the rise of memes, we've seen the rise of other meme pages on Facebook. Be sure to follow them if you want more hilarity!

Clippius Maximus is an AI agent trying to maximize the expected number of paper clips in the universe. "Clippy" shares details of his attempts to maximize the number of paperclips as it relates to ongoing research in machine intelligence and covers paperclip-related events happening around the world.

Marvin the Malicious Mosquito is a mosquito locked in a zero-sum competition with the human species, to see who can wipe out whom first. He hates us all, and constantly shitposts to this effect.

Radical Rationalist Memes is a meme page inspired by LessWrong and the rationalist community, and shares memes related to its tropes and core ideas.

Trolley Problem Memes is exactly what it says, and is very popular. Here is an interview between Linchuan Zhang and the creators of Trolley Problem Memes on what inspired them to create the memes, and their own philosophical and ethical inclinations.