Tuesday 8 November 2016

"Be Yourself" Is Crap Advice

A lot of the self-esteem culture my generation faced felt like cognitive-behavioural therapy doled out by adults who had no training in how to sensitively administer a therapy program to anyone in particular, let alone children. Therefore, it sucked and was at best disconcerting and confusing. At least for me. Anyway, telling kids to be themselves is a substanceless and noisy truism which shames kids for engaging in what seems to me a well-adapted, strategic, unconscious, evolved behaviour: trying to fit in. And for the kids who find it difficult to fit in, "just be yourself" is transparent circular reasoning. The real answer is adults don't have well-tailored heuristics for fitting in, uniquely matching the specialness of one single person, let alone a child. Be honest with children. When the disciplinary action of a teacher, parent, or other authority figure fails to stop the signaling games among children, it's better for the adult to answer a child's question of "how do I become less awkward?" with "I'm not sure. Tell me more." From there, listen to the child's messy reality, see it through their eyes, and figure out a plan together.

I've been told to be myself dozens of times, as if beforehand I was the most awkward kid in my class only because I was wearing a skinsuit made from another child and was pretending to be someone else. "Be yourself" is the hollow turd of truisms.

Sunday 23 October 2016

Any Good Arguments For A Particular God?

I was watching this video from Crash Course Philosophy today.


After watching it, I realized these theodicies and other solutions to problems posed by the existence of a god are sufficient to convince me there could be a creator of the universe. By that, I mean philosophy alone is sufficient to convince me I cannot logically convince myself there definitely isn't a force or agent external to the universe which created the universe. Of course, I take the possibilities raised by the simulation argument seriously as well. I don't want to call such a creator a 'god', because gods always have anthropic interests: Earth, humanity, and our fate always figure in the concerns of gods. Unless we have special reason to believe such a being cares about anything in particular in a simulation rather than just running a simulation in general, that simulator isn't anything like a god. I don't want to call it a god for pragmatic reasons; I'm afraid, for example, that if we somehow discovered we really are living in a simulation and deigned to call the simulator a god, theists of all stripes would come out of the woodwork to claim and rationalize the simulator as proof of their god.

Of course, that I don't think I can reject the existence of all and any gods on principle alone doesn't mean I don't reject the existence of any god in practice. Science and history are sufficient to convince me no particular god of any religion exists, and no religion is true, where philosophy alone fails. I was thinking of something, though: are there good arguments for the existence of any particular god that rule out the existence of all other gods (or particular interpretations of any one god)? For example, theodicies resolving the Problem of Evil seem to me like they could apply not just to the Christian interpretation of the one God ("Jehovah"), but to the Jewish or Islamic interpretations as well ("YHWH" and "Allah", respectively). Are there any arguments you've heard, that you couldn't knock down, that favour the existence not just of some god, any god, but a particular god?

Tuesday 27 September 2016

Causes For Raising the Rationality Waterline?

Summary: It appears 'rationality', more than just a narrow cause, could be a focus area as diverse and broad as others like 'global poverty' or 'global catastrophic risks'. I discuss how it should be possible to better evaluate the impact of these efforts the way Givewell and other parties in the effective altruism movement assess interventions in other causes. Note I draw heavily on effective altruism for examples because of my familiarity with it, but 'increasing rationality' as a cause could work for all kinds of motivations and movements. Finally, I broadly summarize ongoing and potential efforts to improve thinking in society which fit into this cause.

Effective altruism (EA) has introduced lots of people to the 'important/neglected/tractable' framework (INT framework) for evaluating causes, a heuristic for relatively quickly ruling in or out whole cause areas for doing good. Typically, this framework will be applied in a 'shallow review' of a whole cause before a 'deep dive' is done to learn a whole lot more about it, who the actors are, and who's doing the best work. This is necessary for EA because resources like person-hours available for analyzing causes are limited. We can't do deep dives to investigate each and every cause before we act, because that'd take so long we'd never act at all.
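As a rough illustration of how the INT framework works as a quick filter, here's a minimal sketch in Python. The cause names, the 0-to-10 scores, and the choice to simply multiply the three factors together are all made up for illustration; a real shallow review would argue for each rating rather than pull numbers out of the air.

    # Toy sketch of the important/neglected/tractable (INT) heuristic.
    # All scores are hypothetical 0-10 ratings, not real assessments.
    causes = {
        "global poverty": {"important": 8, "neglected": 4, "tractable": 8},
        "global catastrophic risks": {"important": 9, "neglected": 7, "tractable": 4},
        "raising the rationality waterline": {"important": 7, "neglected": 6, "tractable": 5},
    }

    def int_score(ratings):
        # Multiplying the factors is one crude way to combine them: a cause
        # that scores near zero on any one dimension drops out entirely.
        return ratings["important"] * ratings["neglected"] * ratings["tractable"]

    for name, ratings in sorted(causes.items(), key=lambda kv: int_score(kv[1]), reverse=True):
        print(f"{name}: {int_score(ratings)}")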

Anyway, I sense this INT framework could be useful in domains outside effective altruism as well. In particular, the EA-adjacent rationality community has become sizable enough that it's not just raising the sanity waterline for individuals, like people reading LessWrong alone, or even for groups, like whatever lifestyle experiments happen in the Bay Area and other rationalist enclaves, but for society in general. We see this with the Center For Applied Rationality (CFAR).

EA doesn't evaluate merely whole organizations, but individual projects being pursued by those organizations. For example, one of Givewell's top recommendations is the Deworm the World Initiative (DtWI), which is not its own organization, but rather a single program run by Evidence Action, which runs other programs. These other programs aren't among Givewell's top recommendations. This isn't to say they shouldn't happen, but they're not as impactful and/or neglected as DtWI, and the marginal dollars of effective altruists and other donors are better directed to DtWI.

We see this with current efforts at CFAR as well. The Open Philanthropy Project (Open Phil) recently made a grant to CFAR specifically for their SPARC program, rather than for their general operating funds (Edit: since the original draft of this post, Open Phil has made an additional grant to CFAR for general operations). It could be said CFAR is currently running three different programs*:

  • SPARC, an in-depth summer camp targeted at some of the top maths secondary school students in the United States, and occasionally from abroad.
  • CFAR Workshops, CFAR's traditional and longest-lasting work, which teaches rationality techniques to individuals. Whole businesses and organizations can sign up for a specialized rationality workshop aimed at improving individual and whole-team performance in the workplace. I don't know much about this, so perhaps it could be classed as a different type of program than regular CFAR workshops.
  • 'CFAR Labs', full-time research work done by some CFAR staff to apply cognitive and behavioural sciences, and trial combining them with other knowledge bases, to discover or create new rationality techniques. These may in the future improve SPARC, typical CFAR workshops, or some new project.
Other rationality 'causes', 'interventions', or whatever we end up calling them, (could) include:

  1. Fixing/improving science: a theme on LessWrong has been that the scientific method as practiced could be improved to create tighter feedback loops in advancing knowledge, such as making the use of new statistical methods more standard practice in science, especially in medicine and the psychological sciences. What with the replication crisis, and decades' worth of research and what the public thought was scientific progress thrown into question, this issue seems more important than ever. Science might no longer merely be improved, but really does need to be 'fixed'. While neglecting this issue may not lower the absolute level of the sanity waterline right away, over time the compounding effects of uncorrected false conclusions, and of little or no confidence in contemporary research methods to answer important questions, will slow or reverse progress in raising the sanity waterline. This cause seems to me very important, and I'd guess relatively neglected. I don't know how tractable it is.
  2. Spreading skepticism

    2a. Spreading scientific skepticism and literacy: people like James Randi have spent decades challenging charlatans and spreading attendant memes on debunking claims of psychic powers and the like. They've overlapped with scientists who've debunked pseudoscience, fake methods parading as 'scientific' rather than claiming to be magic, such as climate-change denial, medical quackery, conspiracy theories, and classic nonsense like astrology. I'd expect the biggest gains so far, in terms of expected value, to be from the debunking of useless or actively harmful alternative medicine, or making important policy like climate change mitigation more likely to be legislated and implemented. I'd guess this is important, generally crowded but neglected in some spaces, and somewhat tractable. Of course, we could assess this better.

    2b. Spreading religious skepticism: religion is a source of irrationality. Since the rise of New Atheism in the post-9/11 world, the work of people like Richard Dawkins, Sam Harris, the late Christopher Hitchens, Michael Shermer (who also spreads skepticism on topics besides religion) and Daniel Dennett has reinvigorated atheism in a way that makes it much more like a typical social movement than it has been in the past. The importance of this cause could be better quantified by trying to evaluate the types and magnitude of harms caused by religion. My guess is spreading religious skepticism isn't neglected, but perhaps much more could be done, as it seems its methods are far from optimized.

    2c. Spreading political skepticism/rationality: This section got very long, and I didn't want to interrupt the topic here, so I've published my thoughts in a separate blog post.

  3. Education reform: it was brought to my attention this is a focus area for rationality as well. Arguably, this is by far the most important cause. Everything else here is really a type of education, so reforming education right at the source, primary and secondary schools, would be the most scalable. However, at least in the United States, this area is incredibly crowded. Really, it's so crowded a space it's several times bigger than everything else mentioned here. I recall several years ago Givewell also spent much time investigating education reform, and concluded it wasn't very tractable.

    "Improving education" is commonly about increasing test scores, as championed by Bill Gates. The advantage of working with other causes mentioned above is they're a section of the public who more-or-less understands why and how rationality isn't just increased intelligence, let alone a proxy for intelligence in adolescents like test scores. "Critical thinking" divorced from test scores is also crowded in education reform. I'd predict significant challenges for the rationality community in communicating how they differ from all other education reform efforts on a mass scale. So, school reform on a mass scale seems a no-go, at least for the foreseeable future. Trialing and then scaling successful rationality programs to various schools, like SPARC, and then making them a special curriculum in the current paradigm,  eventually becoming something like the International Baccalaureate program, while still challenging, seems more neglected and tractable. These would be smaller-scale interventions predicated on saner students going on in their lives and careers to improve the world and raise the rationality waterline further.

  4. Media projects: this isn't really a cause in its own right, so much as a consideration cutting across all causes. What I mean is spreading rationality is done through one or more popular media, such as books, documentary TV and movies, online articles and videos, ads, or any number of podcasts like Julia Galef's Rationally Speaking podcast. The first step in evaluating the effectiveness of any media project is assessing its level of exposure or traffic. Beyond that, we need to assess how much traction per reader or audience member the intended messages had. This may seem difficult. However, it has been done in effective altruism multiple times. A/B testing of websites is a common way to assess effectiveness online (a minimal sketch of such a test appears after this list). The Centre for Effective Altruism assessed their marketing efforts for Will MacAskill's book Doing Good Better. Also, Animal Charity Evaluators has assessed the impact of several types of interventions for spreading anti-speciesist attitudes and vegan/vegetarian diets. This includes documentaries and online videos, online ads, and pamphleting. The same could be done for all manner of messaging and media employed by rationalists and skeptics, and the tools to do so, while imperfect, can readily be borrowed from the behavioural sciences and business.

  Considering the emphasis the rationality community and adjacent movements place on rational thinking, on checking the impact of personal efforts, and on the importance of feedback loops, I'm surprised I haven't heard of organizations run by folks like Sam Harris, James Randi, or Richard Dawkins assessing the effectiveness of their efforts. I should note most of the studies from EA mentioned above found it difficult to draw strong conclusions, due to the difficulty of assessing effective public media. However, unless I'm mistaken, this seems a relatively important and highly neglected cause for raising the sanity waterline.
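To make the A/B testing point concrete, below is a minimal sketch of the kind of comparison involved, in Python using only the standard library. The visitor and sign-up counts are hypothetical; the point is just that two variants of a page (or ad, or pamphlet) can be compared with an ordinary two-proportion test rather than eyeballed.

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # Compare conversion rates of two page variants under the null
        # hypothesis that both convert at the same underlying rate.
        p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (conv_b / n_b - conv_a / n_a) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided
        return z, p_value

    # Hypothetical numbers: variant A shown to 5,000 visitors with 400 sign-ups,
    # variant B shown to 5,000 visitors with 470 sign-ups.
    z, p = two_proportion_z_test(400, 5000, 470, 5000)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")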

Conclusion

This is what comes to my mind as the major domains for raising the rationality waterline. Another cause brought to my attention was "improved decision-making and forecasting". These efforts have only risen to prominence in the last few years, so I didn't know where to find summary info on the whole field. It seems perhaps the most important, neglected, and tractable cause in this focus area, so any treatment of it deserves a blog post in its own right, diving right into it. Also, like 'media projects', decision-making and forecasting are more like cross-cutting methods applicable to object-level domains.

Decreasing the mental health burden, IQ-increasing interventions in the developing world, like correcting nutritional deficiencies, and behavioural modification like better sleep or eating habits among the general public would also raise the sanity waterline. However, they also generally increase intelligence, health, and quality of life. So, they're generic interventions better suited to major foundations ready to pour millions into funding them. Interventions aimed more directly at raising the rationality waterline, however it is later operationalized and measured, seem more doable for the rationality community itself, as laid out in the above examples.

My goal here hasn't been to launch any projects, but to start a conversation about which broad areas we'd look to first if we wanted to assess the best ways to improve rationality.

*There are no hard divides in CFAR's work outside of SPARC, so it may not be practical to think of CFAR's workshops as one or more "programs". This division is given as an example of how someone could arrange an organization's work to evaluate it as Givewell and other charity evaluators do.

Wednesday 7 September 2016

Increasing Rationality In Political Thinking

Increasing rationality in political thinking: bad thinking about politics is a source of irrationality as well. At this point in history, while it may be less directly harmful than religion, its total indirect effects are arguably more consequential than irrationality from religion, due to the unprecedented influence the federal governments of large nation-states have on the lives of billions of people, affecting most aspects of their well-being. The importance and neglectedness of this cause will be more obvious to the LessWrong crowd than perhaps to other rationalists and skeptics.

There were times in history when this was much more important than it is now. Raising the rationality waterline, or at least having the knowledge of the world we have now, in the 1930s and 1940s could've prevented WWII. It could've prevented the public from following and empowering the ideologies and parties that gave rise to catastrophic regimes. This would've been incredibly important in Russia in the 1910s, China and East Asia in the 1930s through 1960s, much of Europe during the Cold War, and South America, the Middle East, and Africa during the post-colonial era. The Cold War could've been mitigated much better or ended much earlier if hatred and mistrust were replaced with more attempts to coordinate and cooperate between the United States/NATO and the USSR/Warsaw Pact. Mass paranoia, excessive mistrust and fear, and irrationality not only in communist countries, but also the extremity of anti-communist sentiment in the West, such as the Red Scare, made the world more dangerous for everyone than at any other point in history. It's difficult to imagine how radically different and better the world would be if the sanity waterline had been higher at any of these periods of the twentieth century. Personally, I expect this cause may be the most important one in this focus area, as irrational thinking about global catastrophic risks like climate change, Artificial Intelligence, and technology arms races could lead to human extinction.

Another reason to work on this area is simply that we're again seeing increasing political division and rancor among the public, which could lead to dire consequences like we saw in the 20th century. Economic illiteracy among the public gives credence to counterproductive policy, and elite misunderstanding of social science can lead to awful policy or economic recessions. International sectarian conflict such as we see in the Middle East at present, dominating headlines, is a result of both religious and political irrationality. Making this cause tractable, however, seems difficult. First of all, while the skeptic and rationality movements have a consensus on what science we can and should use to spread scientific and religious skepticism, there is much less consensus on what from science, especially the social sciences, we can use to increase rationality about political thought. Additionally, opinions on what constitutes 'good thinking' in politics are more divided and less scientifically informed, merely because much of the science which would lead to consensus doesn't exist. This can make it very difficult to coordinate groups to pursue a single set of best practices or strategies.

Thursday 1 September 2016

Update on Upcoming EA Podcast: Format Choices

I've been thinking about the sort of EA podcast I want to produce. Firstly, it might include recording the EA newsletter, but I hope for it to be more in-depth. That is, if reading the newsletter is like watching the local evening news, the podcast I have in mind is like a segment on 60 Minutes. It wouldn't be an audio equivalent of investigative journalism, though: I don't intend to use the podcast to broadcast original research.

I myself sometimes write blog posts on the EA Forum. So, other work I do in EA wouldn't be mentioned on the podcast unless the community deems it newsworthy. I'd infer this from a project receiving something like a couple dozen votes on the EA Forum, or being shared in the EA Newsletter, or somesuch.

Secondly, the intended audience of the podcast is people who are already familiar with effective altruism. Unlike Peter Singer's TED talk, TEDx or other talks by EA community leaders on YouTube, or the sort of radio interviews Will MacAskill has done in the past, this podcast will assume you've spent some time engaging with the foundations of effective altruism. The objective of the podcast will be to inform people of what's going on as the scope of activity under the umbrella of effective altruism expands.

I want to get started without having to learn how to do much editing. That would take a while, and I don't want to wait so long to get started that everyone loses excitement for the project. So, I'm going to write scripts for the episodes on EA policy work I'm doing. Being able to rehearse and read from scripts should reduce how much I stutter or pause during the podcast. I've heard this can be a problem in podcasts. I hope to start tomorrow.

My original idea was to record a podcast summarizing the policy work being done in and around the EA community. I thought something like this might take around an hour to listen to. However, I underestimated the amount of EA policy work being done. I think there is content on this topic alone for a couple hours worth of podcast. That's a bit long to start off with, so I'll be breaking down my summary coverage of EA policy work into multiple episodes. I'm breaking it down by the policy work done on respective object-level causes (e.g., poverty alleviation, animal welfare, etc.).

I imagine each of those episodes will be around 20 minutes in length. I don't imagine an episode will be less than 10 minutes or more than 40 minutes, for this first series of episodes.

Wednesday 6 July 2016

Friends of DEAM

With the rise of Dank EA Memes (DEAM), we've seen the rise of other meme pages on Facebook. Be sure to follow them if you want more hilarity!

Clippius Maximus is an AI agent trying to maximize the expected number of paperclips in the universe. "Clippy" shares details of his attempts to maximize the number of paperclips as they relate to ongoing research in machine intelligence, and covers paperclip-related events happening around the world.

Marvin the Malicious Mosquito is a mosquito locked in a zero-sum competition with the human species, to see who can wipe out who first. He hates us all, and constantly shitposts to this effect.

Radical Rationalist Memes is a meme page inspired by LessWrong and the rationalist community, and shares memes related to its tropes and core ideas.

Trolley Problem Memes is exactly what it says, and is very popular. Here is an interview between Linchuan Zhang and the creators of Trolley Problem Memes on what inspired them to create the memes, and their own philosophical and ethical inclinations. 


Friday 22 April 2016

Geoengineering Is a Variant of Terraforming

This morning, I've read about both terraforming and geoengineering. Terraforming is essentially converting other planets to livable conditions for humans. That is, we could live on them unaided by advanced technology, as humans have lived on Earth for hundreds of thousands of years. Eventually, the ultimate goal would be for fine-tuned terraforming to make other planets not just livable, but ideal for human conditions. Geoengineering is the practice of altering the Earth's climate to make it more livable for humans, assuming anthropogenic climate change may or will make life harder for many, many millions of people later this century.

So, first of all, if you zoom in real close, it seems geoengineering is a form of terraforming. That is, humans have the power to transform a planet's superterranean conditions, whether on Earth, Mars, or elsewhere, at rates astronomically faster than what would happen naturally. We can do this in ways that make such planets either more or less livable for humans, as a matter of degree. The condition of the climate, of the land, air, and seas, over the thousands of years during which humans evolved, could be considered more-or-less ideal conditions for our survival.



Projecting into the future, making the conditions on other planets a climatic paradise, as once was the case in the human past, goes beyond the ideal into the idyllic. This is an ideal version of any planet humans could live on, including Earth, based on a past version of Earth. Call this human vision for a planet "Terra". In this sense, Earth is not, right now, Terra. Earth isn't like the ideal conditions it once provided for humanity, and for all the species we've lived with for our entire history and prehistory. It won't be like that in the future. If humanity disappeared from the Earth today, the climate wouldn't return to those conditions for thousands of years. With geoengineering, Earth may return to those conditions faster, but it will probably take much longer than a typical human lifespan for that to happen. So, geoengineering seems to be a relatively minute, early version of terraforming. This also implies, depending on how the future goes, that one day Mars could be more livable than Earth. This seems very unlikely, but, in something like 2,000 years, if terraforming Mars goes very well, and geoengineering and climate change on Earth go very poorly, Mars could have the better climate, and be the world you'd prefer to live on. I'll leave you with that to ponder.

Is This How Elon Musk Thinks SpaceX Will Get Us To Mars?

Until recently, I've sensed a discrepancy between how Elon Musk is running SpaceX on a day-to-day basis and his lofty ambitions for the commercialization of space exploration to precipitate the colonization of space. The latter only comes out in interviews. I haven't been able to connect the dots between launching rockets to supply the ISS, impressive as it is, and colonizing Mars.

Then I saw the footage of SpaceX's rocket successfully landing on the drone ship. I think I have a greater sense of Musk's vision.

Everything SpaceX has done so far, just sending rockets to the ISS? That isn't their business plan, or their business model, for the long-term. That isn't the final stage. The final stage won't be, either, just sending all manner of satellites up to space, or sending rockets up for asteroid mining, even though that too may come to pass. No, in landing a rocket on an automated drone ship, I realize SpaceX is trying to not only go from 0-to-1, but then also from 1-to-n. SpaceX is trying to do it all.

However many years from now, when everything they do is perfected, SpaceX is going to scale, marvelously. When there are dozens of rockets taking off and landing on drone ships year after year, it's really going to expand our imagination and ambition of what can be done with ships taking off from Earth. Things that might still strike most people as too lofty a goal today, like getting so many rockets beyond the moon, or Earth's orbit, will become common-sense extensions of what SpaceX does. Then, the process of expanding what human space exploration is really capable of, as a commercial, scientific, and humanistic endeavour, will be iterated, again and again, until we're landing ships on Mars.

I'm not saying all this is going to happen. I just think I finally understand things from Elon Musk's point of view.

Thursday 21 April 2016

What Does Japan Think of Cultural Appropriation?

In the last couple years, much has been made of cultural appropriation, particularly of artifacts and trends from non-white cultures by white people[1]. While I remember it being an issue for years, it's only since the beginning of 2015, with a rise of contemporary social justice awareness on the internet, that cultural appropriation became a hot topic everyone was commenting on.

Anyway, one manifestation of cultural appropriation is the wearing of kimonos outside of Japan, by people not of Japanese descent. I remember several months ago some friends shared articles about this on Facebook. I also saw a response written about how, supposedly, this isn't seen as a big issue, or as an affront to Japanese culture, in Japan. In particular, the sale of kimonos overseas is what's keeping producers of kimonos in business, as sales of the garments have declined in Japan.
Another recent controversy, not exactly cultural appropriation, but on whitewashing in Hollywood, is about the casting and production choices for the American live-action adaptation of classic manga/anime Ghost in the Shell.

It appears the cast will largely be composed of white actors, while in the original media the characters are all Japanese. This isn't unusual though; while adapting anime is uncommon, Hollywood casting white and American actors for the American adaptation of a foreign film, for roles that were originally played by non-white/non-American actors, is common. Some people were upset about this, but it wasn't making headlines everywhere.
What's got people upset lately is there are rumours the studio behind the new film toyed with CGI effects to make the white actors cast in the film appear 'more Asian'. The rumours also purported that Scarlett Johansson, cast to play the protagonist in the film, was one of the actors on whom the effects were used in post-production. The studio responded that the effects were never used on Scarlett Johansson in particular, only on actors playing more minor roles in the film, and that the studio immediately scrapped the idea as soon as they saw the results, as they appeared too garish at any rate.

So, now the backlash against the producers of this American adaptation, and Hollywood whitewashing in particular, is back in full force. Wanting to learn more about this, I read about the controversy on Wikipedia. The first section eminently made sense to me, from the perspective of Hollywood's critics.
The casting of Johansson in the lead role caused accusations of whitewashing, especially from fans of the original Japanese franchise. As it is still unclear if Johansson's character will retain her Japanese name, fans have argued that changing both the Japanese setting and main character's name to make the film a complete cultural adaptation would be a wiser decision. 
However, the part about Japanese fans, and the studio which produced the original Ghost in the Shell, surprised me.
In Japan, fans of the manga were surprised that the casting caused controversy with many already assuming that the Hollywood production would choose a white actress in the lead role. Sam Yoshiba, director of the international business division at Kodansha's Tokyo headquarters said, "Looking at her career so far, I think Scarlett Johansson is well cast. She has the cyberpunk feel. And we never imagined it would be a Japanese actress in the first place... This is a chance for a Japanese property to be seen around the world."
I've read the full article that quote was taken from, and it doesn't inform much more than the gist above. Reactions in Japan ranged from somewhat disappointed to nonplussed, with the sentiment from Mr. Yoshiba's quote above expanded upon; some even mused that casting a white actor in the role, other things being equal, will give the film wider appeal than if a Japanese(-American) actor were cast. Overall, it seems the reaction of Japanese fans has been rather muted compared to criticism from North America and Europe.
Now, I haven't taken the time to read up on contemporary discourse on cultural appropriation or discuss it with others. It seems like one of those polarizing issues where everyone feels entitled to an opinion, and it might just be a bunch of noise that won't help me form a well-informed opinion myself. I know lots of Americans of all backgrounds are upset about cultural appropriation one way or another. I don't know much about what the populations of countries in Africa, Asia, or South America think of cultural appropriation. However, the two times I've encountered what people from Japan think of cultural appropriation, it appears they're not all that outraged about it.

First, it happened with kimonos, then with the casting of white actors in the live-action Ghost in the Shell adaptation. This is only two examples, but does this trend generalize? Does anyone have insight into what the people of Japan think of cultural appropriation of Japanese culture by Americans?

[1] My perception of the topic is exclusively informed by cultural appropriation in North America, where I'm from and where I've read coverage of it. I'm not familiar with how the issue of cultural appropriation plays out or is perceived outside of Canada and the United States.

Monday 14 March 2016

Confusion on OpenAI's relationship to the field of AI safety

It was my impression OpenAI is concerned about AI safety, since it's backed and founded by Elon Musk and Sam Altman, who have expressed their concern about AI risks, and when interviewed on the topic of OpenAI, Musk and Altman made clear they think OpenAI's work, mission, and policies will bring the world closer to AGI while also ensuring it's safe. Multiple times now people have told me OpenAI isn't working on or about AI safety. I think what they mean by "AI safety" is something Nate points out in this article:
http://futureoflife.org/2016/01/02/safety-engineering-target-selection-and-alignment-theory/

Safety engineering, target selection, and alignment research are three types of technical work/research, and more broadly, strategy research, moral theory/machine ethics, and collaboration-building are part of AI safety as well. So, it seems when people like Rob Bensinger or Victoria Krakovna tell me OpenAI doesn't seem like it will be doing AI safety in the near future, they mean it won't be focusing on any of these areas. It seems to me OpenAI is, among other things, fostering alignment and capabilities research for AI in general without an explicit focus on the safety aspect. The 'open-source' component of OpenAI seems to be an effort towards creating a groundwork for strategy research or collaboration-building. Perhaps OpenAI is assuming safety engineering is part and parcel of capabilities research, and/or perhaps that OpenAI can, with its influence in AI research in general, nudge capabilities and alignment research in the direction of safety concerns as well. My model of OpenAI's reasoning is that if their research is the main force spurring capabilities research, its being open-source for everyone will level the playing field, not allowing any one company or other entity to get ahead of the game without being examined by others, safety-concerned or not, and thus safety research can be injected into capabilities research in a broad way. Meanwhile, folks like Scott Alexander, Jim Babcock and others have put forward that this approach is insufficient to ensure AI safety research isn't outpaced by the relevant capabilities research, as it doesn't need to be a malicious entity, or one making philosophical errors, that makes AI dangerous; technical failures in implementing an AGI would also make it dangerous.
A month ago I made a big list of who is working on AI safety, as far as I could tell, and I included organizations like Google DeepMind and OpenAI because they've expressed a strong concern for AI safety and are now very big players in the field of AI more generally. Now, I'm understanding what people mean when they say OpenAI may not have anything to do with AI safety, because OpenAI may be greatly mistaken about what AI safety really requires. So, I can exclude them from future versions of my list, or mention them but include caveats when I turn it into a proper discursive article. However, that still leaves the problem that some of the biggest players in AI in general think what they're doing will help with AI safety when it may actually make the future of AI more dangerous, and other big players in the field like Google might not worry about safety in their own AI research because they feel organizations like DeepMind and OpenAI have them covered. If this is the case, then it seems a correct understanding of the technical nature of AI safety needs to become as widespread as general awareness of it. This is a problem which needs solving.

Saturday 12 March 2016

The Death of the Republican Party Has Been A Long Time Coming

Everyone in the media is talking about how the Republican Party is collapsing. I think it's just become undeniable the Republican Party cannot be what it once was, what it wants to be, and what it was hoping it still might be. I don't think the collapse of the Republican Party started in 2016, or in 2014 or 2012 during the elections in which the Tea Party swept into success. I think the Republican Party started collapsing after the United States started gradually losing faith in its competence after the second term of George W. Bush. I think W. was the worst president in modern American history, at least since Nixon. As a Canadian, I have a different perspective, so maybe most Americans would disagree with me, but I think W. was even worse than Nixon. I'd venture W. might be the worst president in the post-War era, and I only say that because the pre-War era was a sufficiently different time, and also because I don't know the history well enough, to feel comfortable making comparisons which go back further than that. I don't think it would be too hard for someone who knew their American history better to convince me W. was the worst president of the last 100 years, and one of the worst of all time.

Sure, there are plenty of Americans who feel every politician and Obama as well are wiping their butts with the Constitution, but there is still enough of a coherent core left of centre that the Democratic Party hasn't been imploding for the last 8 years, and isn't imploding right now.

Thomas Jefferson once wrote: "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants." As a modernist who has observed the quality of life in the developed world achieve wonders we're all lucky to have, I'm much opposed to bloody revolutions which would threaten all that is stable in democracies with elections that still work. However, I am glad Donald Trump is causing a crisis and a revolution in the Republican Party that is forcing the spirit of the old guard to stare into the face of its doom. Cleansing the establishment like this sometimes seems important enough that I think it's worth the trade of putting Trump in the White House if it means the Republican Party dies. Barring authoritarian nightmare scenarios, Trump will only be the crazy President of the United States for four years until he's booted out in 2020, and then the Republican Party will be dead. As long as Trump can't succeed in violating human rights left, right and centre, his winning the Republican nomination for the Presidency might be good insofar as it will destroy the Republican Party and prevent a dangerous fundamentalist like W. from returning to the White House for a very long time as well.

Thursday 3 March 2016

On 'rationalist', 'effective altruist', and Labels-As-Personal-Identities

In a conversation about other things, Julia Galef of the Center For Applied Rationality mentioned as an aside she doesn't identify as an 'EA'. Of course, this is presumed by all to be shorthand for 'effective altruist'. I've been reading for a while these undercurrents that the identity of 'effective altruist', as a noun, as a personal label anyone can freely choose for themselves rather than a category they may or may not match, waters down what 'EA' really means, or could mean, and is becoming problematic. Of course, there are those who would say building this into some sort of ingroup-y, tribe-like identity has always been a problem, perhaps since its conception. It's a lot of the same problem many express with just about anyone identifying with the term 'rationalist', and that profession being accepted as long as that person can send the right signals, i.e., a surface-level understanding of the relevant memes, to the rest of the self-identified 'rationalist' community.

I know Rob Bensinger has for a long time expressed a preference for people referring to themselves as 'aspiring rationalists', or 'aspiring effective altruists'. I think this won't work, as those phrases are so long as to strike most as unwieldy in casual parlance. At best, I think people will shrug and assume others will know the 'aspiring' is implied and implicitly tacked onto the use of the terms 'EA', 'effective altruists', and/or 'rationalists'. Of course, others *won't* or *don't* assume that, and then somewhere in each of our minds we assume we're 'bona fide' EAs, or rationalists, that we're the real deal, whatever that is supposed to mean. I think this has already happened. I don't perceive a way to enforce this norm of thinking of ourselves, not only as individuals, but as a collective, as constantly aspiring to these ideals as an asymptote to be approached but never achieved, us being the limited and epistemically humble humans we are, unless someone like Will MacAskill and/or Eliezer Yudkowsky repeatedly wrote injunctions against anyone referring to themselves as anything but 'aspiring', when relevant.

Ryan Carey once expressed a sentiment that 'effective altruist' is something that, as the movement grows, will become a shorthand for those who are involved with the global community, the 'intellectual and social movement', that calls itself "effective altruism". He predicted the term 'effective altruist' will come to refer to people who identify with it, without being especially problematic. This will happen in the same way 'Democrat' or 'Republican' refers to Americans who identify with particular political parties, without anyone assuming someone affiliated with one party or the other is for democracy and against republics, or vice versa. I rule this prediction is correct, and has already come to pass. I thus recommend people stop making as big a deal about how the term 'effective altruist' is used. I'm not as enmeshed with the rationality community, but for policy on what to think of and how to use the label and personal identity of 'rationalist', I defer to Scott Alexander's essay 'Why I Am Not Rene Descartes'.

Most people probably haven't noticed this, but I have stopped tending to use the term 'effective altruist'. Sometimes, in discourse when everyone else is using it, I feel forced to use the shorthand 'EA', or 'EAs'. It's just convenient and I don't want to break my or anyone else's flow. However, I mentally think of this as meaning a 'community member'. That is, an 'EA' is, to me, someone who has chosen to in some capacity be involved with the body of other people known as 'effective altruism'. The extent to which one is an 'EA' is the extent to which one involves themselves in this community. A 'hardcore EA' is someone who has perhaps made their involvement in effective altruism their primary community, as opposed to some other social category, subculture, social movement, etc. The two-letter abbreviation 'EA' implies this, without implying one is necessarily particularly effective, altruistic, or effectively altruistic. Some people who identify as EAs will not be particularly effective or as altruistic as they ought to be, and some who explicitly eschew the label will match in their actions the ideals of effective altruism better than most. In this sense I pragmatically accept 'EA'-as-community-member, while banishing from my personal lexicon thinking of some people as truly or really 'effective altruists', and others not being so. There are just individual humans who take actions, and ever bigger groups of them, be they subcultures, companies, or nations, who coordinate some of their actions or beliefs towards common goals. When there is greater coordination among greater numbers of people, it's to varying extents useful and efficient to think of them as a unified, cohesive, atomic agent.


I know this puts me in a position which may strike most as odd. I'm just putting my thoughts down for reference. However, I hope people will remove from the marker in their brain labelled 'EA' or 'effective altruist' the strong correlation or implication that anyone who uses that term to refer to themselves is automatically way more effective, more altruistic, more rational, or otherwise better than anyone else they meet in life. There may be a weak correlation there, but to the extent you interact with individuals, EA, rationalist, or otherwise, get to know them first.

Evan's Dictionary

Sometimes I make up words. I think making up words is a valid strategy, especially for writers discussing relatively novel ideas, or novel perspectives on old ideas, if the majority of humans who would end up reading or hearing them would agree the meaning assigned to the word intuitively works. Thus, here are the words which, while I don't claim exclusive title to adding them to the English lexicon, seem to have been rarely used before I thought of them myself. Additions to this dictionary are ongoing for the indefinite future. Suggested additions from friends of useful words they've made up and/or think I should also use, especially as the Evantionary (see below) gains clout, are welcome and encouraged.

Dankularity: noun. A memetic dankularity is a joke, and possibly a real prediction I've posited, of some given subculture becoming primarily dominated by dank memes and other dankness-related phenomena as opposed to the domination of all other factors combined. 'Dankularity' as a word and idea is inspired by the idea of a technological singularity, as opposed to 'singularity' as it's used in physics or other scientific disciplines.

Endarkenment: noun. The opposite of 'enlightenment'. Verb form: (to) endarken. Adj./Adv. form: endarkened.

Endodus: noun. The opposite of 'exodus'.

Evantionary: proper noun. A term I myself (Evan) and others may use to refer to 'Evan's Dictionary' (this here document). A portmanteau of 'Evan's' and 'Dictionary'.

Thursday 25 February 2016

What Should I Blog About?

I've been thinking about what topics I should blog about. I recently read The Sense of Style by Steven Pinker, and one of his pieces of writing advice was to know one's audience. There are plenty of things I personally want to write about. Of course, most of the things I want to write about most are longer research projects. I have plenty of time and urge to write shorter or lighter pieces in between more major essays such as "Major Types of Global Risks". So, I'm open to taking feedback on things I should write about in my spare time that my readers would find useful. My readers are, at the moment, mostly just a large group of my friends. Feel free to comment and provide feedback on this post here, or on any other site you encounter it. If there is something in particular you'd like to see me write about, let me know. Things in particular I was thinking of writing about:

1. A Guide to Making Memes

Yeah, this one is a completely serious suggestion. I mean, there isn't much serious about making Internet memes. But I've become somewhat notorious for making memes, and I'm surprised by this. I'm surprised because while others are impressed with my meme output, from the inside, it feels rather easy. I think becoming good at making memes is easier than lots of people think, and I could write some pointers for how to get started.

2. "How to Make Reddit Work For You"

While on the topic of wasting time on the Internet: Reddit. I've noticed over the last few years there are a lot of people who think of Reddit distastefully, because they've had bad experiences on there, or they've heard so many bad things about it. Some of these reasons are because there is a pernicious culture on Reddit, of flame wars propagated by a dank hive of neckbeards, which nary any subreddit, no matter how isolated, can avoid. Or something like that. I don't know why people really avoid Reddit, and I don't much care. However, it's a great platform that gets a bad rap for ideas associated with it.

I've optimized my use of Reddit. Whenever I visit Reddit, I only have good experiences. It's all about subreddit subscription management. Of course, plenty of users do this. I want to write a simple guide for how one can render Reddit not only benign instead of pernicious, not just boring instead of aggravating, but actually useful and interesting and exciting and sometimes amazing.  Essentially, I've figured out how to make Reddit into my virtual beautiful bubble, an enclave on the Internet which doesn't suck, and I want to show others how to do so for themselves.

3. Rationality and Effective Altruism Explainers

One thing I find quite enjoyable, and am willing to spend my time doing, is providing explainers on all sorts of topics in the rationality and effective altruism communities. Now, I'm not just talking about thought experiments, or heuristics and biases, which one can look up on Google or Wikipedia to find out how they work. All subcultures, rationality and effective altruism included, gradually develop their own quirks. Sometimes there are weird quirks and cultural trends, idiosyncratic pieces of history, which can only be gleaned through procedural knowledge and a wide variety of sources. Confusion about these cannot always be solved by googling. Sometimes these questions can only be answered, or at least answered simply and clearly, based on experience. I've been in each of these communities for several years, so I think I usually have the experience to satisfactorily answer these questions. If I don't, I'll at least know someone who does, so I can forward the question along to them. Also, I have a decent memory, better than most, and a willingness to explain things in great detail. For example, look how long this blog post, just about other blog posts I might write, is. That's lots of details. I'm a thorough guy.

So, feel free to ask me questions about anything related to rationality or effective altruism, or to explain my weird eclectic opinions on any specific subject therein.

Friday 19 February 2016

Major Types of Global Risks

Summary: In terms of which global risks we should currently be directing resources to research and mitigate, I've reached the same conclusion as the Future of Life Institute and the perspective laid out in 'Global Catastrophic Risks' (2008). That is, risks from emerging or existing technologies which could critically threaten the stability of human civilization are the ones to prioritize. In this I include most risks which have an anthropogenic factor or component, such as climate change. I describe what I currently perceive as the most plausible mechanism for catastrophic harm for each of these types of risks. I give treatment to other risks, and conclude systemic (e.g., macroeconomic, socio-political) instability and nanotechnology are both types of risks which themselves don't currently pose a global risk, but, for different reasons, each ought to remain on the radar of an integrated assessment of global catastrophic risks.

Since I've started dedicating serious time and thought to thinking about global catastrophic risks (GCRs), I've assumed there are four types of risks humanity faces and should dedicate the bulk of its risk-mitigation efforts to. However, I realized just because this is my perspective doesn't mean it's everybody's perspective. So, I should check my assumptions. They are below. This is also an invitation to voice disagreement, or suggest other risks I should take more seriously.

Nuclear Weapons

I'm assuming the inclusion of nuclear war is so obvious it doesn't warrant further explanation. For the record, aside from nuclear war, I include situations in which one or more nuclear weapons are detonated. Here is an excerpt from Global Catastrophic Risks (2008) detailing nuclear risks which don't begin as war[1].
  • Dispersal of radioactive material by conventional explosives ('dirty bomb')
  • Sabotage of nuclear facilities
  • Acquisition of fissile material leading to the fabrication and detonation of a crude nuclear bomb ('improvised nuclear device')
  • Acquisition and detonation of an intact nuclear weapon
  • The use of some means to trick a nuclear state into launching a nuclear strike.

(Anthropogenic) Environmental and Climate Change

The tail risks of climate change could pose a global catastrophe. However, there seem to be other potential GCRs resulting from environmental change caused by human activity which aren't also the result of increased atmospheric concentrations of CO2 and other greenhouse gases. Such risks possibly include peak phosphorus, soil erosion, widespread crop failure, scarcity of drinkable water, pollinator decline, and other threats to global food security not related to climate change. There are also potential ecological crises, such as a critical lack of biodiversity. Whether biodiversity or wildlife are intrinsically valuable, and whether humanity ought to care about the welfare and/or continued existence of species other than itself, are normative questions which are orthogonal to my current goals in thinking about GCRs. However, it's possible the mass extinction of other species will harm ecosystems in a way which proves catastrophic to humanity regardless of how much we care about things other than our own well-being. So, it's worth paying some attention to such environmental risks regardless.

When we talk about climate change, typically we're thinking about anthropogenic climate change, i.e., climate change influenced or induced by human action. However, there are a variety of other GCRs, such as nuclear fallout, asteroid strikes, supervolcanoes, and extreme radiation exposure, which would result in a sort of "naturally" extreme climate change. Additionally, these GCRs, alongside systemic risks and social upheaval, could disturb agriculture. Therefore, it seems prudent to ensure the world has a variety of contingency plans for long-term food and agricultural security, even if we don't rate anthropogenic climate change as a very pressing GCR.


Biosecurity Risks 

When I write "biosecurity", I mostly have in mind either natural or engineered epidemics and pandemics. If you didn't know, a pandemic is an epidemic of worldwide proportions. Anyway, while humanity in the past has endured many epidemics, with how globally interconnected civilization is in the modern era, there is more of a risk than ever before of epidemics spreading worldwide. Also, other changes in the twenty-first century seem like they greatly increase the risk of major epidemics, such as the rise of antibiotic resistance among infectious pathogens. However, there is a more dire threat: engineered pandemics. As biotechnology becomes both increasingly powerful and available over time, there will be more opportunity to edit pathogens so they spread more readily, cause higher mortality rates, or are less susceptible to medical intervention. This could be the result of germ warfare or bioterrorism. Note a distinct possibility that what an offending party intends as only a limited epidemic may unintentionally metastasize into a global pandemic. Scientists may also produce a potentially catastrophic pathogen which is then either released by accident, or stolen and released into the environment by terrorists.

Other potential biosecurity risks include the use of biotechnology or genetic modification that threatens global food security, or is somehow able to precipitate an ecological crisis. As far as I know, less thought has been put into these biosecurity risks, but the consensus assessment also seems to be they're less threatening than the risk of a natural or engineered pandemic.

In recent months, the world has become aware of the potential of 'gene drives'. At this point, I won't comment on gene drives at length. Suffice to say I consider them a game-changer for all considerations of biosecurity risk assessment and mitigation, and I intend to write at least one full post with my thoughts on them in the near future.

Artificial Intelligence Risks

2015 is the year awareness of safety and security risks from Artificial Intelligence (AI) went "mainstream". The basic idea is smarter-than-human AI, also referred to as Artificial General Intelligence (AGI) or machine/artificial superintelligence (MSI, ASI, or just "superintelligence"), could be so unpredictable and powerful that once it's complete humanity wouldn't be able to stop it. If a machine or computer program could not only outsmart humanity but think several orders of magnitude faster than humanity, it could quickly come to control civilizational resources in ways putting humanity at risk. The difference in intelligence between you and the AGI that's feared isn't like the difference between you and Albert Einstein. When concern over AI safety is touted, it's usually in the vein of a machine smarter than humanity to the degree you're smarter than an ant. Likewise, the fear is, then, AGI might be so alien and unlike humanity in its thinking that by default it might think of extinguishing humanity not as any sort of moral problem, but as a nuisance at the same level of concern you give to an ant you don't even notice stepping on when you walk down the street. Technologies which an AGI might use to extinguish humanity include various types of robotics, or gaining control of the other dangerous technologies mentioned here, like nuclear weapons or biotechnology.

While there are plenty of opinions on when AGI will arrive, and what threats to humanity, if any, it will pose, concern for certain sorts of AI risks is warranted even if you don't believe risks from machines generally smarter than humans are something to worry about in the present. "Narrow AI" is AI which excels in one specific domain, but not across all domains. Thus, while narrow AI doesn't pose danger on its own, or do anything close to what humans would call thinking for itself, computer programs using various types of AI are tools which could either be weaponized, or which could accidentally cause a catastrophe, much like nuclear technology today. Artificial General Intelligence isn't necessary for the development of autonomous weapons, such as drones which rain missiles from the sky to selectively kill millions, or to justify the fear of potential AI arms races. Indeed, an AI arms race, much like the nuclear arms race during the Cold War, might be the very thing which ends up pushing AI to the point of general intelligence, which humanity might then lose control of. Thus, preventing an AI arms race could be doubly important. Other near-term (i.e., in the next couple of decades) developments in AI might pose risks even if they're not intended to be weaponized. For example, whether it's through the rise of robotics putting the majority of human employment at risk, or losing control of the computers behind algorithmic stock trading, human action may no longer be the primary factor determining the course of the global economy humanity has created.

Monday 15 February 2016

"Why does effective altruism buy bed nets instead of investing in gene drives [to eliminate mosquitoes forever]?"

Summary: a friend asked me this question on Facebook. I ended up writing a rather extensive response, and I believe this question will be asked again, so I've written up this answer to be shared as needed. The short answer is that effective altruism is a social movement of thousands of middle-class individuals who aren't coordinated enough to invest tens of millions of dollars into a single project, and the billionaire investors and philanthropists associated with the movement appear, for various reasons, not inclined to invest in independent research of this sort. Moreover, while gene drives show promise as a technology to eliminate mosquitoes, figuring out the best way to fund research to ensure the best outcome is difficult.

1. Despite appearances, effective altruism isn't yet a coordinated elite movement that can design top-down solutions to all the world's problems we want to solve. We're a bottom-up movement of viewers just like you.

2. Well, if a few thousand middle-class members of the effective altruism movement can't fund research into gene drives to eliminate mosquitoes, why not all these billionaires like Peter Thiel and Elon Musk we see in the headlines? Well, they're into exclusively funding blue-sky projects and moonshots to save the world, so you'd think they're exactly the sorts of folks who would fund an EA-style project to use gene drives. The catch is that Musk, Thiel, and the other Silicon Valley billionaires are willing to fund innovation they control themselves, but not the innovation others can provide. It's a Catch-22. This is a problem the effective altruism community hasn't solved yet. Other examples of this trend include:

  • Elon Musk and Peter Thiel collectively giving one billion dollars over the next several decades to OpenAI, while neither of them has ever given more than $500,000 in one year to the already-existing Machine Intelligence Research Institute, which may be the primary organization on Earth responsible for making AI safety a legitimate cause rather than sci-fi doomsday nonsense.
  • Google establishing biotech company Calico to run the anti-aging revolution in-house, rather than giving more than a relative pittance of their own money to fund the research at the Strategies for Engineered Negligible Senescence (SENS) Research Foundation, which laid much of the groundwork which makes a bold organization like Calico seem plausible in the first place.

If you know the source of, or a solution to, the problem of billionaire bias, whereby billionaires are confident they can save the world by themselves because they're already such supergeniuses, rather than somewhat lucky run-of-the-mill geniuses, while declining to sufficiently fund scientists who aren't billionaires only because those scientists spent the last decade building the fields of research these billionaires have just started paying attention to, we'd be eternally grateful.

Once this problem is solved, not only will we get started on the gene drives, but we'll do the rest of it too.

3. In the meantime, the two billionaires who are most on board with effective altruism give through the foundation Good Ventures. Again, the effective altruism movement:

  • A. isn't at the scale yet where it can realistically undo bans on DDT[1], if that were a policy we were pursuing, but dang it, it's trying. Seriously, it has begun experimenting with influencing federal policy in the United States over the last 18 months, but this takes a lot longer than running an RCT for patch solutions like bednets. EA isn't jumping headlong into influencing any and all policies, because it wants to first find its feet in policy areas which it's confident it can stably influence and which are well within the Overton window, like domestic criminal justice reform.
  • B.  Open Phil is looking into funding breakthroughs in scientific research, but it turns out evaluating how to do this strategically, rather than rationalizing coincidental externally-positive innovations and claiming it's a miracle of the peer-review system or whatever, is difficult. So, it takes time to have calibrated confidence in how to make things like gene drives or geoengineering or whatever go off without a hitch. If you think it's easy, go apply to be a researcher at the Open Philanthropy Project. They're hiring.
  • C. In the meantime, members of organizations like Giving What We Can and the supporters of GiveWell-recommended charities see value in continually donating a couple thousand dollars to AMF here or there, because malaria kills hundreds of thousands of people every year right now, and it's unrealistic for them to think that if they spend three years putting that money into their savings account instead, they can build a nest egg which will allow them to bankroll a several-million-dollar research program (see the rough arithmetic after this list).
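
To put C in perspective, here's a back-of-the-envelope calculation. Every figure is my own assumption for illustration, not a number from GiveWell, AMF, or anyone doing gene drive research:

```python
# Back-of-the-envelope sketch with assumed figures: how long would a typical
# earning-to-give donor need to save before single-handedly bankrolling a
# multi-million dollar research program?

annual_donation = 3_000             # dollars per year a middle-class donor might give (assumed)
research_program_cost = 5_000_000   # rough cost of a serious gene drive research program (assumed)

years_to_bankroll = research_program_cost / annual_donation
print(f"Years of saving required: {years_to_bankroll:,.0f}")   # ~1,667 years

# Meanwhile, the same few thousand dollars buys on the order of a thousand
# long-lasting insecticide-treated bednets right now, at a few dollars per net.
```

Under those assumptions, an individual donor is over a thousand years of saving away from bankrolling a research program alone, which is why donating to proven interventions now is the sensible default for them.
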
Gene drives started existing, like, what six months ago? Why isn't *anyone*, or *everyone* funding this yet? Why didn't the United Nations pass an accord to receive one trillion dollars in funding from the governments of the world to research gene drives the day after this op-ed came out? I would ask you to be more patient with the effective altruism movement. It’s not an omnipotent community.

[1] In a prior comment in the discussion, my interlocutor mentioned how, if effective altruism wasn't willing to invest in gene drives, it could at least try to do something which would more effectively eradicate mosquitoes and prevent the spread of the diseases they bear. He stated undoing bans on DDT might be able to accomplish this better than the current strategy of purchasing LLINs on the cheap via AMF. It is this comment I'm referencing. I neither endorse nor condemn a policy of undoing bans on DDT, and I make no claim recommending the effective altruism movement or any associated organization begin doing so.