Thursday 25 February 2016

What Should I Blog About?

I've been thinking about what topics I should blog about. I recently read The Sense of Style by Steven Pinker, and one of his pieces of writing advice was to know one's audience. There are plenty of things I personally want to write about, but most of them are longer research projects. I have plenty of time, and the urge, to write shorter or lighter pieces in between more major essays such as "Major Types of Global Risks". So, I'm open to feedback on what I should write about in my spare time, so that my readers will find it useful. My readers are, at the moment, mostly a large group of my friends. Feel free to comment and provide feedback on this post here, or on any other site where you encounter it. If there is something in particular you'd like to see me write about, let me know. Here are some things in particular I was thinking of writing about:

1. A Guide to Making Memes

Yeah, this one is a completely serious suggestion. I mean, there isn't much serious about making Internet memes, but I've become somewhat notorious for making them, and this surprises me: while others are impressed with my meme output, from the inside it feels rather easy. I think becoming good at making memes is easier than most people think, and I could write some pointers for how to get started.

2. "How to Make Reddit Work For You"

While on the topic of wasting time on the Internet: Reddit. I've noticed over the last few years that a lot of people think of Reddit with distaste, either because they've had bad experiences there, or because they've heard so many bad things about it. The usual story is that there's a pernicious culture on Reddit, of flame wars propagated by a dank hive of neckbeards, and that nary a subreddit, no matter how isolated, can avoid it. Or something like that. I don't know why people really avoid Reddit, and I don't much care. However, it's a great platform that gets a bad rap for the ideas associated with it.

I've optimized my use of Reddit. Whenever I visit Reddit, I only have good experiences. It's all about subreddit subscription management. Of course, plenty of users do this. I want to write a simple guide for how one can render Reddit not just benign instead of pernicious, and not just boring instead of aggravating, but actually useful, interesting, exciting, and sometimes amazing. Essentially, I've figured out how to make Reddit into my own beautiful virtual bubble, an enclave on the Internet which doesn't suck, and I want to show others how to do the same for themselves.

3. Rationality and Effective Altruism Explainers

One thing I find quite enjoyable, and am willing to spend my time doing, is providing explainers on all sorts of topics in the rationality and effective altruism communities. Now, I'm not just talking about thought experiments, or heuristics and biases, which one can look up on Google or Wikipedia. All subcultures, rationality and effective altruism included, gradually develop their own quirks. Sometimes there are weird quirks and cultural trends, idiosyncratic pieces of history, which can only be gleaned through procedural knowledge and a wide variety of sources. Confusion about these can't always be resolved by googling. Sometimes these questions can only be answered, or at least answered simply and clearly, from experience. I've been in each of these communities for several years, so I think I usually have the experience to answer these questions satisfactorily. If I don't, I'll at least know someone who does, so I can forward the question along to them. Also, I have a decent memory, better than most, and a willingness to explain things in great detail. For example, look how long this blog post is, and it's just about other blog posts I might write. That's lots of detail. I'm a thorough guy.

So, feel free to ask me questions about anything related to rationality or effective altruism, or to ask for my weird, eclectic opinions on any specific subject therein.

Friday 19 February 2016

Major Types of Global Risks

Summary: In terms of which global risks we should currently be directing resources toward researching and mitigating, I've reached the same conclusion as the Future of Life Institute and the perspective laid out in 'Global Catastrophic Risks' (2008). That is, risks from emerging or existing technologies which could critically threaten the stability of human civilization are the ones to prioritize. In this I include most risks which have an anthropogenic factor or component, such as climate change. I describe what I currently perceive as the most plausible mechanism for catastrophic harm for each of these types of risks. I also give treatment to other risks, and conclude that systemic (e.g., macroeconomic, socio-political) instability and nanotechnology are both types of risks which don't themselves currently pose a global risk but, for different reasons, each ought to remain on the radar of an integrated assessment of global catastrophic risks.

Since I started dedicating serious time and thought to global catastrophic risks (GCRs), I've assumed there are four types of risks humanity faces and should dedicate the bulk of its risk-mitigation efforts to. However, I realized that just because this is my perspective doesn't mean it's everybody's perspective. So, I should check my assumptions. They are below. This is also an invitation to voice disagreement, or to suggest other risks I should take more seriously.

Nuclear Weapons

I'm assuming the inclusion of nuclear war is so obvious it doesn't warrant further explanation. For the record, aside from nuclear war, I include situations in which one or more nuclear weapons are detonated. Here is an excerpt from Global Catastrophic Risks (2008) detailing nuclear risks which don't begin as war[1].
  • Dispersal of radioactive material by conventional explosives ('dirty bomb')
  • Sabotage of nuclear facilities
  • Acquisition of fissile material leading to the fabrication and detonation of a crude nuclear bomb ('improvised nuclear device')
  • Acquisition and detonation of an intact nuclear weapon
  • The use of some means to trick a nuclear state into launching a nuclear strike

(Anthropogenic) Environmental and Climate Change

The tail risks of climate change could pose a global catastrophe. However, there seem to be other potential GCRs resulting from environmental change caused by human activity which aren't also the result of increased atmospheric concentrations of CO2 and other greenhouse gases. Such risks possibly include peak phosphorus, soil erosion, widespread crop failure, scarcity of drinkable water, pollinator decline, and other threats to global food security not related to climate change. There are also potential ecological crises, such as a critical lack of biodiversity. Whether biodiversity or wildlife are intrinsically valuable, and whether humanity ought to care about the welfare and/or continued existence of species other than itself, are normative questions orthogonal to my current goals in thinking about GCRs. However, it's possible the mass extinction of other species will harm ecosystems in a way which proves catastrophic to humanity regardless of how much we care about things other than our own well-being. So, it's worth paying some attention to such environmental risks regardless.

When we talk about climate change, we're typically thinking about anthropogenic climate change, i.e., climate change influenced or induced by human action. However, there are a variety of other GCRs, such as nuclear fallout, asteroid strikes, supervolcanoes, and extreme radiation exposure, which would result in a sort of "naturally" extreme climate change. Additionally, these GCRs, alongside systemic risks and social upheaval, could disturb agriculture. Therefore, it seems prudent to ensure the world has a variety of contingency plans for long-term food and agricultural security, even if we don't rate anthropogenic climate change as a very pressing GCR.


Biosecurity Risks 

When I write "biosecurity", I mostly have in mind natural or engineered epidemics and pandemics. If you didn't know, a pandemic is an epidemic of worldwide proportions. Anyway, while humanity has endured many epidemics in the past, with how globally interconnected civilization is in the modern era, there is more risk than ever before of epidemics spreading worldwide. Other changes in the twenty-first century, such as the rise of antibiotic resistance among infectious pathogens, also seem to greatly increase the risk of major epidemics. However, there is a more dire threat: engineered pandemics. As biotechnology becomes increasingly powerful and available over time, there will be more opportunity to edit pathogens so they spread more readily, cause higher mortality rates, or are less susceptible to medical intervention. This could be the result of germ warfare or bioterrorism. Note the distinct possibility that what an offending party intends as only a limited epidemic may unintentionally metastasize into a global pandemic. Scientists may also produce a potentially catastrophic pathogen which is then either released by accident, or stolen and released into the environment by terrorists.

Other potential biosecurity risks include the use of biotechnology or genetic modification that threatens global food security, or is somehow able to precipitate an ecological crisis. As far as I know, less thought has been put into these biosecurity risks, but the consensus assessment also seems to be they're less threatening than the risk of a natural or engineered pandemic.

In recent months, the world has become aware of the potential of 'gene drives'. At this point, I won't comment on gene drives at length. Suffice it to say I consider them a game-changer for all considerations of biosecurity risk assessment and mitigation, and I intend to write at least one full post with my thoughts on them in the near future.

Artificial Intelligence Risks

2015 was the year awareness of safety and security risks from Artificial Intelligence (AI) went "mainstream". The basic idea is that smarter-than-human AI, also referred to as Artificial General Intelligence (AGI) or machine/artificial superintelligence (MSI, ASI, or just "superintelligence"), could be so unpredictable and powerful that, once complete, humanity wouldn't be able to stop it. If a machine or computer program could not only outsmart humanity but think several orders of magnitude faster than humanity, it could quickly come to control civilizational resources in ways that put humanity at risk. The feared difference in intelligence between you and an AGI isn't like the difference between you and Albert Einstein. When concern over AI safety is touted, it's usually in the vein of a machine smarter than humanity to the degree you're smarter than an ant. The fear, then, is that AGI might be so alien and unlike humanity in its thinking that by default it would treat extinguishing humanity not as any sort of moral problem, but as a nuisance at the same level of concern you give to an ant you don't even notice stepping on as you walk down the street. Technologies an AGI might use to extinguish humanity include various types of robotics, or control of the other dangerous technologies mentioned here, such as nuclear weapons or biotechnology.

While there are plenty of opinions on when AGI will arrive, and what threats to humanity, if any, it will pose, concern for certain sorts of AI risks is warranted even if you don't believe risks from machines generally smarter than humans are something to worry about in the present. "Narrow AI" is AI which excels in one specific domain, but not in all domains. Thus, while narrow AI doesn't pose danger on its own, and doesn't do anything close to what humans would call thinking for itself, computer programs using various types of AI are tools which could either be weaponized, or accidentally cause a catastrophe, much like nuclear technology today. Artificial General Intelligence isn't necessary for the development of autonomous weapons, such as drones which rain missiles from the sky to selectively kill millions, or to justify the fear of potential AI arms races. Indeed, an AI arms race, much like the nuclear arms race during the Cold War, might be the very thing which ends up pushing AI to the point of general intelligence, which humanity might then lose control of. Thus, preventing an AI arms race could be doubly important. Other near-term (i.e., in the next couple of decades) developments in AI might pose risks even if they're not intended to be weaponized. For example, whether it's through the rise of robotics putting the majority of human employment at risk, or through losing control of the computers behind algorithmic stock trading, human action may no longer be the primary factor determining the course of the global economy humanity has created.

Monday 15 February 2016

"Why does effective altruism buy bed nets instead of investing in gene drives [to eliminate mosquitoes forever]?"

Summary: A friend asked me this question on Facebook. I ended up writing a rather extensive response, and I believe this question will be asked again, so I've written up this answer to be shared as needed. The short answer is that effective altruism is a social movement of thousands of middle-class individuals who aren't coordinated enough to invest tens of millions of dollars into a single project, and the billionaire investors and philanthropists associated with the movement appear, for various reasons, disinclined to invest in independent research of this sort. Moreover, while gene drives show promise as a technology to eliminate mosquitoes, figuring out the best way to fund research to ensure the best outcome is difficult.

1. Despite appearances, effective altruism isn't yet a coordinated elite movement that can design top-down solutions to all the world's problems we want to solve. We're a bottom-up movement of viewers just like you.

2. Well, if a few thousand middle-class members of the effective altruism movement can't fund research into gene drives to eliminate mosquitoes, why not all these billionaires like Peter Thiel and Elon Musk we see in the headlines? They're more into exclusively funding their own blue-sky projects and moonshots to save the world. They seem like the sorts of folks who would fund an EA-style project to use gene drives, but Musk, Thiel, and the other Silicon Valley billionaires are willing to fund innovation they control themselves, not innovation others can provide. It's a Catch-22. This is a problem the effective altruism community hasn't solved yet. Other examples of this trend include:

  • Elon Musk and Peter Thiel collectively giving one billion dollars over the next several decades to OpenAI, while neither of them has ever given more than $500,000 in one year to the already-existing Machine Intelligence Research Institute, which may be the primary organization on Earth responsible for making AI safety a legitimate cause rather than sci-fi doomsday nonsense.
  • Google establishing the biotech company Calico to run the anti-aging revolution in-house, rather than giving more than a relative pittance of their own money to fund research at the SENS Research Foundation (Strategies for Engineered Negligible Senescence), which laid much of the groundwork that makes a bold organization like Calico seem plausible in the first place.

If you know the source of, or a solution to, this problem of billionaire bias, whereby billionaires are confident they can save the world by themselves because they're already such supergeniuses, rather than somewhat lucky run-of-the-mill geniuses, and so decline to sufficiently fund non-billionaire scientists who spent the last decade building the very fields of research these billionaires have just started paying attention to, we'd be eternally grateful.

Once this problem is solved, not only will we get started on the gene drives, but we'll do the rest of it too.

3. In the meantime, the billionaires who are most on board with effective altruism are the couple behind the foundation Good Ventures. Again, the effective altruism movement:

  • A. isn't at the scale yet where it could realistically undo bans on DDT[1], if that were a policy we were pursuing, but dang it, it's trying. Seriously, it has begun experimenting with influencing federal policy in the United States over the last 18 months, but this takes a lot longer than running an RCT for patch solutions like bednets. EA isn't jumping headlong into influencing any and all policies, because it wants to first find its feet in policy areas it's confident it can stably influence and which are well within the Overton window, like domestic criminal justice reform.
  • B. Open Phil is looking into funding breakthroughs in scientific research, but it turns out that evaluating how to do this strategically, rather than rationalizing coincidental externally-positive innovations and claiming they're a miracle of the peer-review system or whatever, is difficult. So, it takes time to have calibrated confidence in how to make things like gene drives or geoengineering or whatever go off without a hitch. If you think it's easy, go apply to be a researcher at the Open Philanthropy Project. They're hiring.
  • C. In the meantime, members of organizations like Giving What We Can, and supporters of GiveWell-recommended charities, see value in continually donating a couple thousand dollars to AMF here and there, because malaria kills hundreds of thousands of people every year right now, and it's unrealistic for them to think that if they spend three years putting that money into their savings accounts, they can build a nest egg which will allow them to bankroll a several-million-dollar research program.

Gene drives started existing, like, what, six months ago? Why isn't *anyone*, or *everyone*, funding this yet? Why didn't the United Nations pass an accord to receive one trillion dollars in funding from the governments of the world to research gene drives the day after this op-ed came out? I would ask you to be more patient with the effective altruism movement. It's not an omnipotent community.

[1] In a prior comment in the discussion, my interlocutor mentioned that if effective altruism wasn't willing to invest in gene drives, it could at least try to do something which would more effectively eradicate mosquitoes and prevent the spread of the diseases they bear. He stated that undoing bans on DDT might accomplish this better than the current strategy of purchasing LLINs on the cheap via AMF. It is this comment I'm referencing. I neither endorse nor condemn a policy of undoing bans on DDT, and I make no claim recommending the effective altruism movement or any associated organization begin doing so.

Sunday 14 February 2016

Ding, Dong, The Witch is a Red Herring

In the last couple of days, United States Supreme Court Justice Antonin Scalia died. If you don't know who that is, feel free to look him up. You might be reading this post at some date later than the date it was published, when I share it again in the wake of the death of another public figure. It doesn't matter to the point I'm about to make that it was Antonin Scalia who just died.

Anyway, there are many Americans who didn't like Antonin Scalia, because he was one of the most conservative justices on the Supreme Court while he served, and voted on many cases in ways they didn't like. Apparently, lots of people on social media and across the blogosphere are celebrating his death because they didn't like him, while others are chiding them, on the grounds that celebrating someone's death is never appropriate. Others still might be using this opportunity as a bully pulpit to push another agenda. I'm seeing the second-order effects of the debate in my own Facebook news feed, as I currently live in Canada rather than the United States, and am thus removed from most of the hubbub. I don't want to one-up anyone. I just want a record of my thoughts to exist so I don't have to think this through, or explain it in full, again the next time it comes up.

In the wake of the death of Margaret Thatcher, the former Prime Minister of the United Kingdom, many self-identified liberals or progressives were glad at the news of her death. Maybe most people didn't display this sort of gaucheness, but a proportion of Internet users did. In his essay "I Can Tolerate Anything Except the Outgroup", Scott Alexander of Slate Star Codex made the following observation about this phenomenon:

The worst reaction I’ve ever gotten to a blog post was when I wrote about the death of Osama bin Laden. I’ve written all sorts of stuff about race and gender and politics and whatever, but that was the worst.
I didn’t come out and say I was happy he was dead. But some people interpreted it that way, and there followed a bunch of comments and emails and Facebook messages about how could I possibly be happy about the death of another human being, even if he was a bad person? Everyone, even Osama, is a human being, and we should never rejoice in the death of a fellow man. One commenter came out and said:
I’m surprised at your reaction. As far as people I casually stalk on the internet (ie, LJ and Facebook), you are the first out of the “intelligent, reasoned and thoughtful” group to be uncomplicatedly happy about this development and not to be, say, disgusted at the reactions of the other 90% or so.
This commenter was right. Of the “intelligent, reasoned, and thoughtful” people I knew, the overwhelming emotion was conspicuous disgust that other people could be happy about his death. I hastily backtracked and said I wasn’t happy per se, just surprised and relieved that all of this was finally behind us.
And I genuinely believed that day that I had found some unexpected good in people – that everyone I knew was so humane and compassionate that they were unable to rejoice even in the death of someone who hated them and everything they stood for.
Then a few years later, Margaret Thatcher died. And on my Facebook wall – made of these same “intelligent, reasoned, and thoughtful” people – the most common response was to quote some portion of the song “Ding Dong, The Witch Is Dead”. Another popular response was to link the videos of British people spontaneously throwing parties in the street, with comments like “I wish I was there so I could join in”. From this exact same group of people, not a single expression of disgust or a “c’mon, guys, we’re all human beings here.”
I gently pointed this out at the time, and mostly got a bunch of “yeah, so what?”, combined with links to an article claiming that “the demand for respectful silence in the wake of a public figure’s death is not just misguided but dangerous”.
And that was when something clicked for me.
You can talk all you want about Islamophobia, but my friend’s “intelligent, reasoned, and thoughtful people” – her name for the Blue Tribe – can’t get together enough energy to really hate Osama, let alone Muslims in general. We understand that what he did was bad, but it didn’t anger us personally. When he died, we were able to very rationally apply our better nature and our Far Mode beliefs about how it’s never right to be happy about anyone else’s death.
On the other hand, that same group absolutely loathed Thatcher. Most of us (though not all) can agree, if the question is posed explicitly, that Osama was a worse person than Thatcher. But in terms of actual gut feeling? Osama provokes a snap judgment of “flawed human being”, Thatcher a snap judgment of “scum”.
Apparently, what others took from this is that when someone celebrates the death of a public figure in a way which is hypocritical, it's appropriate to point out their hypocrisy. What I took from this is the resigned conclusion that people won't stop celebrating the deaths of public figures outside their political tribe, that I can't stop them, and that I choose not to fight that battle. People are just going to celebrate the deaths of public figures in their hated outgroup no matter what I do. Rather than trying to force everyone to cooperate in solemnly respecting the deaths of public figures, I figure we just have to let everyone in opposing groups take the occasional potshots at each other's heroes. While this hurts rather than helps the easing of intergroup conflict, it's hardly the most egregious escalation of rhetorical warfare. It's not my priority to quell every insignificant battle, especially when I think most typical strategies don't work.

This may be an unpopular opinion. I'm not standing up for what I think is right. I am not standing up for what is just or true. I'm not a coward so much as I'm cynical that my efforts will be worth much, when I expect I'm no less liable than anyone else to provoke a local flame war, while failing to prevent or change the behaviour of thousands of others across the Internet with whom I'll never interact. However, in a week, this instance of hypocrisy will subside, and the news and social media will grasp onto whatever new flavour of outrage is in vogue. There will be hundreds more skirmishes over who is the most righteous, and who is the most hypocritical, in yet another culture war. There are more and more culture wars all the time these days. I opt out of these wars because I don't think preventing the local Nash equilibrium of two sides insulting one another is worth my time when there isn't also an opportunity to teach someone that the only way to win these signaling games is not to play. I'm saving my effort for those opportunities, which seem rare, but which I really wish were more common in my everyday life. I would manufacture them if I could, but I don't know how. If you do, please let me know.

"Celebrating the deaths of public figures belonging to the nearest outgroup" is part of the broader trend of displaying some sort of outrage as a way to signal loyalty or as a badge of membership in some favoured ingroup. On the Internet, at least, this is in turn a way of getting infected by competing thought-germs:



These pernicious and insidious memes are using each of us, and each of our minds, as vectors to fight their battles for them. Just as much as they make us their soldiers, they make us their weapons as we become more mindless in our verbal violence. There will be times when I am a hypocrite too, as I'll unwittingly fall into the trap of entering this sort of debate. I might even be the most offending party, the first to fire a shot in the newest skirmish. However, whenever I gain the self-awareness to notice my mind has been hijacked, I will try to notice the pattern, and then I will stop participating in that particular instance of doomed debate entirely. I will not be a soldier in these wars if I can help it. I am a conscientious objector. I hope you will be too.

I will opt out of either side of these types of wars, because the true generals are ideas that will never die unless we stop paying attention to them. I am angling for world peace. I won't wade in as a peacekeeper in every skirmish to teach, in one particular case, how two belligerents are acting mindlessly. I would much rather teach people the broader trend of how politics can be a mind-killer, or at least that to think about politics is to think in hard mode. Scott Alexander of Slate Star Codex himself might see the proxy war among humans between dueling thought-germs as one aspect of the monster known as Moloch. I have some friends who would call my strategy an attempt to avoid becoming the pawn of an egregore, wherever it came from. Egregores are an interesting concept I have much to learn about, and one I may introduce at greater length at a future time. I can't do them justice without a proper intro, and I won't attempt one here as my post winds down.

I'm not trying to one-up anyone by saying I'll tolerate anybody, including the outgroup, and also hypocritical members of the ingroup. Even if it seems like I'm trying to do that, I don't mean to come across that way. However, all it might take to express tolerance is to not stop people from being themselves, including when they're celebrating the death of a public enemy who in other circles is a venerated figure. This is my last commentary on the topic, and I will say nothing else regardless.

Friday 12 February 2016

Listening to Podcasts While Working: I Can't Even...

In your experience, does listening to a podcast in the background while you're doing other work on the computer distract you from absorbing the content of the podcast, reading something, or typing up what you're working on? I think listening to music might decrease my productivity when typing things up, but not when reading on the computer. However, I think it puts me in the zone and keeps me from getting distracted by things besides what I'm working on, so I'll take that tradeoff.

This hasn't been my experience when listening to podcasts, though. In particular, I'll listen to lectures on YouTube in a tab in the background, and when I've done this in the past, I've had negative results. Sometimes I'll be reading an article while listening to a lecture or interview in the background, an hour will pass, and I'll realize I don't remember a single thing the speaker(s) said. Alternatively, if I consciously focus on listening to the lecture or interview while my eyes continue to scan, I'll realize I've moved my eyes over an entire paragraph without absorbing any of its content. It seems focusing on two mediums of intellectual content at the same time is too intense a form of multitasking for my brain to handle. Focusing on one simply wrecks my ability to focus on the other, to a nearly complete degree. Does anyone else have this experience? Do you know any ways around it?

Tuesday 2 February 2016

My Effective Altruism Origin Story

Effective altruism is a destination each person arrives at by way of a unique path. As such, it's always interesting to hear someone's origin story. Here's mine.

So, there's this guy who lives across the street from me, right? One day he was like "hey man, you should check out this blog". I was like "seems legit". The rest is history.