Friday, 22 April 2016

Geoengineering Is a Variant of Terraforming

This morning, I read about both terraforming and geoengineering. Terraforming is essentially converting other planets to conditions livable for humans. That is, we could live on them unaided by advanced technology, as humans have lived on Earth for hundreds of thousands of years. The ultimate goal would be for fine-tuned terraforming to make other planets not just livable, but ideal for human habitation. Geoengineering is the practice of altering the Earth's climate to make it more livable for humans, on the assumption that anthropogenic climate change may, or will, make life harder for many millions of people later this century.

So, first of all, if you zoom in real close, it seems geoengineering is a form of terraforming. That is, humans have the power to transform a planet's superterranean conditions, whether on Earth, Mars, or elsewhere, at rates astronomically faster than what would happen naturally. We can do this in ways that make such planets more or less livable for humans, as a matter of degree. The condition of the climate, of the land, air, and seas, over the thousands of years during which humans evolved, could be considered the more-or-less ideal conditions for our survival.

Projecting into the future, making the conditions on other planets a climatic paradise, as once was the case in the human past, goes beyond the ideal into the idyllic. This is an ideal version of any planet humans could live on, including Earth, based on a past version of Earth. Call this human vision for a planet "Terra". In this sense, Earth is not, right now, Terra. Earth isn't like the ideal it once was for humanity, and for all the species we've lived with for our entire history and prehistory, and it won't be like that in the future.

If humanity disappeared from the Earth today, the climate wouldn't return to those conditions for thousands of years. With geoengineering, Earth may return to those conditions faster, but it will probably take much longer than a typical human lifespan. So, geoengineering seems to be a relatively minute, early form of terraforming. This also implies, depending on how the future goes, that one day Mars could be more livable than Earth. That seems very unlikely, but in something like 2,000 years, if terraforming Mars goes very well, and geoengineering and climate change on Earth go very poorly, Mars could have the better climate, and be the world you'd prefer to live on. I'll leave you with that to ponder.

Is This How Elon Musk Thinks SpaceX Will Get Us to Mars?

Until recently, I'd sensed a discrepancy between how Elon Musk runs SpaceX on a day-to-day basis and his lofty ambition for the commercialization of space exploration to precipitate the colonization of space, which only comes out in interviews. I hadn't been able to connect the dots between launching rockets to resupply the ISS, impressive as that is, and colonizing Mars.

Then I saw the footage of SpaceX's rocket successfully landing on a drone ship, and I think I now have a greater sense of Musk's vision.

Everything SpaceX has done so far, just sending rockets to the ISS? That isn't their business plan, or their business model, for the long term. That isn't the final stage. Nor will the final stage be just sending all manner of satellites up to space, or sending rockets up for asteroid mining, even though that too may come to pass. No, in landing a rocket on an automated drone ship, I realize SpaceX is trying not only to go from 0 to 1, but then also from 1 to n. SpaceX is trying to do it all.

However many years from now, when everything they do is perfected, SpaceX is going to scale, marvelously. When there are dozens of rockets taking off and landing on drone ships year after year, it's really going to expand our imagination and our ambition for what can be done with ships taking off from Earth. Things that might still strike most people as too lofty a goal today, like getting so many rockets beyond the Moon, or beyond Earth's orbit, will become common-sense extensions of what SpaceX does. Then the process of expanding what human space exploration is really capable of, as a commercial, scientific, and humanistic endeavour, will be iterated, again and again, until we're landing ships on Mars.

I'm not saying all this is going to happen. I just think I finally understand things from Elon Musk's point of view.

Thursday, 21 April 2016

What Does Japan Think of Cultural Appropriation?

In the last couple of years, much has been made of cultural appropriation, particularly of artifacts and trends from non-white cultures by white people[1]. While I remember it being an issue for years, it's only since the beginning of 2015, with the rise of contemporary social justice awareness on the internet, that cultural appropriation became a hot topic everyone was commenting on.

Anyway, one manifestation of cultural appropriation is the wearing of kimonos outside of Japan, by people not of Japanese descent. I remember several months ago some friends shared articles about this on Facebook. I also saw a response about how, supposedly, this isn't seen as a big issue, or as an affront to Japanese culture, in Japan. In particular, the sale of kimonos overseas is what's keeping kimono producers in business, as sales of the garments have declined in Japan.
Another recent controversy, not exactly about cultural appropriation but about whitewashing in Hollywood, concerns the casting and production choices for the American live-action adaptation of the classic manga/anime Ghost in the Shell.

It appears the cast will largely be composed of white actors, while in the original media the characters are all Japanese. This isn't unusual, though; while adapting anime is uncommon, it's common for Hollywood to cast white and American actors in the American adaptation of a foreign film, in roles originally played by non-white/non-American actors. Some people were upset about this, but it wasn't making headlines everywhere.
What's got people upset lately are rumours that the studio behind the new film toyed with CGI effects to make the white actors cast in the film appear 'more Asian'. The rumours also purported that Scarlett Johansson, cast to play the protagonist, was one of the actors on whom the effects were used in post-production. The studio responded that the effects were never used on Scarlett Johansson in particular, only on actors playing more minor roles, and that it immediately scrapped the idea as soon as it saw the results, which appeared too garish at any rate.

So, now the backlash against the producers of this American adaptation, and against Hollywood whitewashing in general, is back in full force. Wanting to learn more about this, I read about the controversy on Wikipedia. The first section made eminent sense to me, from the perspective of Hollywood's critics:
The casting of Johansson in the lead role caused accusations of whitewashing, especially from fans of the original Japanese franchise. As it is still unclear if Johansson's character will retain her Japanese name, fans have argued that changing both the Japanese setting and main character's name to make the film a complete cultural adaptation would be a wiser decision. 
However, the part about Japanese fans, and the studio which produced the original Ghost in the Shell, surprised me:
In Japan, fans of the manga were surprised that the casting caused controversy with many already assuming that the Hollywood production would choose a white actress in the lead role. Sam Yoshiba, director of the international business division at Kodansha's Tokyo headquarters said, "Looking at her career so far, I think Scarlett Johansson is well cast. She has the cyberpunk feel. And we never imagined it would be a Japanese actress in the first place... This is a chance for a Japanese property to be seen around the world."
I've read the full article that quote was taken from, and it doesn't add much more than the gist above. Reactions in Japan ranged from somewhat disappointed to indifferent, with the sentiment from Mr. Yoshiba's quote above expanded upon: some even mused that casting a white actor in the role, other things being equal, will give the film wider appeal than casting a Japanese(-American) actor would. Overall, it seems the reaction of Japanese fans has been rather muted compared to the criticism from North America and Europe.
Now, I haven't taken the time to read about, or discuss with others, the contemporary discourse on cultural appropriation. It seems like one of those polarizing issues where everyone feels entitled to an opinion, and it might just be a bunch of noise that won't help me form a well-informed opinion myself. I know lots of Americans of all backgrounds are upset about cultural appropriation one way or another. I don't know much about what the populations of countries in Africa, Asia, or South America think of cultural appropriation. However, the two times I've encountered what people from Japan think of it, it appears they're not all that outraged.

First, it happened with kimonos, then with the casting of white actors in the live-action Ghost in the Shell adaptation. These are only two examples, but does the trend generalize? Does anyone have insight into what the people of Japan think of the cultural appropriation of Japanese culture by Americans?

[1] My perception of the topic is exclusively informed by cultural appropriation in North America, where I'm from and where I've read coverage of it. I'm not familiar with how the issue of cultural appropriation plays out or is perceived outside of Canada and the United States.

Monday, 14 March 2016

Confusion on OpenAI's Relationship to the Field of AI Safety

It was my impression that OpenAI is concerned about AI safety, since it was founded and backed by Elon Musk and Sam Altman, who have expressed their concern about AI risk, and when interviewed on the topic of OpenAI, Musk and Altman made clear they think OpenAI's work, mission, and policies will bring the world closer to AGI while also ensuring it's safe. Multiple times now, people have told me OpenAI isn't working on AI safety. I think what they mean by "AI safety" is something Nate Soares points out in this article:

http://futureoflife.org/2016/01/02/safety-engineering-target-selection-and-alignment-theory/

Safety engineering, target selection, and alignment research are three types of technical work/research, and, more broadly, strategy research, moral theory/machine ethics, and collaboration-building are part of AI safety as well. So, it seems when people like Rob Bensinger or Victoria Krakovna tell me OpenAI doesn't seem like it will be doing AI safety in the near future, they mean it won't be focusing on any of these areas. It seems to me OpenAI is, among other things, fostering alignment and capabilities research for AI in general without an explicit focus on the safety aspect. The 'open-source' component of OpenAI seems to be an effort towards creating a groundwork for strategy research or collaboration-building.

Perhaps OpenAI is assuming safety engineering is part and parcel of capabilities research, or perhaps that OpenAI can, with its influence on AI research in general, nudge capabilities and alignment research in the direction of safety concerns as well. My model of OpenAI's reasoning is that if their research is the main force spurring capabilities research, its being open-source for everyone will level the playing field, not allowing any one company or other entity to get ahead of the game without being examined by others, safety-concerned or not; thus safety research can be injected into capabilities research in a broad way. Meanwhile, folks like Scott Alexander, Jim Babcock, and others have put forward that this approach is insufficient to keep AI safety research from being outpaced by the relevant capabilities research, since it needn't be a malicious entity, or one making philosophical errors, that makes AI dangerous; technical failures in implementing an AGI would also make it dangerous.
A month ago I made a big list of who is working on AI safety, as far as I could tell, and I included organizations like Google DeepMind and OpenAI because they've expressed a strong concern for AI safety and are now very big players in the field of AI more generally. Now I understand what people mean when they say OpenAI may not have anything to do with AI safety: OpenAI may be greatly mistaken about what AI safety really requires. So, I can exclude them from future versions of my list, or mention them with caveats when I turn it into a proper discursive article.

However, that still leaves the problem that some of the biggest players in AI think what they're doing will help with AI safety when it may actually make the future of AI more dangerous, while other big players in the field, like Google, might not worry about safety in their own AI research because they feel organizations like DeepMind and OpenAI have them covered. If this is the case, then a correct understanding of the technical nature of AI safety needs to become as widespread as general awareness of it. This is a problem which needs solving.

Saturday, 12 March 2016

The Death of the Republican Party Has Been A Long Time Coming

Everyone in the media is talking about how the Republican Party is collapsing. I think it's just become undeniable that the Republican Party cannot be what it once was, what it wants to be, and what it was hoping it still might be. I don't think the collapse of the Republican Party started in 2016, or in 2014 or 2012, the elections in which the Tea Party swept to success. I think the Republican Party started collapsing when the United States gradually lost faith in its competence during the second term of George W. Bush. I think W. was the worst president in modern American history, at least since Nixon. As a Canadian, I have a different perspective, so maybe most Americans would disagree with me, but I think W. was even worse than Nixon. I'd venture W. might be the worst president of the post-War era, and I only say that because the pre-War era was a sufficiently different time, and because I don't know the history well enough, to feel comfortable making comparisons which go back before that. I don't think it would be too hard for someone who knows their American history better to convince me W. was the worst president of the last 100 years, and one of the worst of all time.

Sure, there are plenty of Americans who feel every politician, Obama included, is wiping their butt with the Constitution, but there is still enough of a coherent core left of centre that the Democratic Party hasn't been imploding for the last 8 years, and isn't imploding right now.

Thomas Jefferson once wrote: "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants." As a modernist who has watched quality of life in the developed world achieve wonders we're all lucky to have, I'm much opposed to bloody revolutions which would threaten all that is stable in democracies whose elections still work. However, I am glad Donald Trump is causing a crisis and a revolution in the Republican Party, one forcing the spirit of the old guard to stare into the face of its doom. Cleansing the establishment this way sometimes seems important enough that I think it would be worth the trade of putting Trump in the White House if it means the Republican Party dies. Barring authoritarian nightmare scenarios, Trump would only be the crazy President of the United States for four years until he's booted out in 2020, and then the Republican Party will be dead. As long as Trump can't succeed in violating human rights left, right, and centre, his winning the Republican nomination for the Presidency might be good insofar as it will destroy the Republican Party and prevent a dangerous fundamentalist like W. from returning to the White House for a very long time.

Thursday, 3 March 2016

On 'rationalist', 'effective altruist', and Labels-As-Personal-Identities

In a conversation about other things, Julia Galef of the Center for Applied Rationality mentioned as an aside that she doesn't identify as an 'EA'. Of course, this is presumed by all to be shorthand for 'effective altruist'. I've been reading for a while these undercurrents that the identity of 'effective altruist', as a noun, as a personal label anyone can freely choose for themselves rather than a category they may or may not match, waters down what 'EA' really means, or could mean, and is becoming problematic. Of course, there are those who would say building this into some sort of ingroup-y, tribe-like identity has always been a problem, perhaps since the movement's inception. It's much the same problem many express with just about anyone identifying with the term 'rationalist', and that profession being accepted as long as the person can send the right signals, i.e., a surface-level understanding of the relevant memes, to the rest of the self-identified 'rationalist' community.

I know Rob Bensinger has for a long time expressed a preference for people referring to themselves as 'aspiring rationalists', or 'aspiring effective altruists'. I think this won't work, as the phrase is so long as to strike most as unwieldy in casual parlance. At best, I think people will shrug and assume others will know the 'aspiring' is implied and implicitly tacked onto the terms 'EA', 'effective altruist', and/or 'rationalist'. Of course, others *won't* or *don't* assume that, and then somewhere in each of our minds we assume we're 'bona fide' EAs, or rationalists, that we're the real deal, whatever that is supposed to mean. I think this has already happened. I don't perceive a way to enforce this norm of thinking of ourselves, not only as individuals but as a collective, as constantly aspiring to these ideals as an asymptote to be approached but never reached, us being the limited and epistemically humble humans we are, unless someone like Will MacAskill and/or Eliezer Yudkowsky wrote repeated injunctions against anyone referring to themselves as anything but 'aspiring', when relevant.

Ryan Carey once expressed the sentiment that 'effective altruist' is a term which, as the movement grows, will become shorthand for those who are involved with the global community, the 'intellectual and social movement', that calls itself "effective altruism". He predicted the term 'effective altruist' will come to refer to people who identify with it, without being especially problematic. This will happen in the same way 'Democrat' or 'Republican' refers to Americans who identify with a particular political party, without anyone assuming someone affiliated with one party or the other is for democracy and against republics, or vice versa. I judge this prediction correct, and already come to pass. I thus recommend people stop making such a big deal about how the term 'effective altruist' is used. I'm not as enmeshed with the rationality community, but for policy on what to think of, and how to use, the label and personal identity of 'rationalist', I defer to Scott Alexander's essay 'Why I Am Not Rene Descartes'.

Most people probably haven't noticed this, but I've mostly stopped using the term 'effective altruist'. Sometimes, in discourse when everyone else is using it, I feel forced to use the shorthand 'EA', or 'EAs'. It's just convenient, and I don't want to break my or anyone else's flow. However, I mentally think of this as meaning a 'community member'. That is, an 'EA' is, to me, someone who has chosen to be involved, in some capacity, with the body of other people known as 'effective altruism'. The extent to which one is an 'EA' is the extent to which one involves oneself in this community. A 'hardcore EA' is someone who has perhaps made effective altruism their primary community, as opposed to some other social category, subculture, social movement, etc. The two-letter abbreviation 'EA' implies this, without implying one is necessarily particularly effective, altruistic, or effectively altruistic. Some people who identify as EAs will not be particularly effective or as altruistic as they ought to be, and some who explicitly eschew the label will match, in their actions, the ideals of effective altruism better than most.

In this sense I pragmatically accept 'EA'-as-community-member, while banishing from my personal lexicon thinking of some people as truly or really 'effective altruists', and others not. There are just individual humans who take actions, and ever bigger groups of them, be they subcultures, companies, or nations, who coordinate some of their actions or beliefs towards common goals. When there is greater coordination among greater numbers of people, it's to varying extents useful and efficient to think of them as a unified, cohesive, atomic agent.

I know this puts me in a position which may strike most as odd. I'm just putting my thoughts down for reference. However, I hope people will remove from the marker in their brain labelled 'EA' or 'effective altruist' the strong correlation or implication that anyone who uses that term to refer to themselves is automatically far more effective, more altruistic, more rational, or otherwise better than anyone else they meet in life. There may be a weak correlation there, but to the extent you interact with individuals, EA, rationalist, or otherwise, get to know them first.

Evan's Dictionary

Sometimes I make up words. I think making up words is a valid strategy, especially for writers discussing relatively novel ideas, or novel perspectives on old ideas, if the majority of humans who would end up reading or hearing them would agree the meaning assigned to the word intuitively works. Thus, here are the words which, while I don't claim exclusive title to adding them to the English lexicon, seem to have been rarely used before I thought of them myself. Additions to this dictionary are ongoing for the indefinite future. Suggested additions from friends, of useful words they've made up and/or think I should also use, especially as the Evantionary (see below) gains clout, are welcome and encouraged.

Dankularity: noun. A memetic dankularity is a joke, and possibly a real prediction, I've posited of some given subculture becoming primarily dominated by dank memes and other dankness-related phenomena, as opposed to domination by all other factors combined. 'Dankularity' as a word and idea is inspired by the idea of a technological singularity, as opposed to 'singularity' as it's used in physics or other scientific disciplines.

Endarkenment: noun. The opposite of 'enlightenment'. Verb form: (to) endarken. Adjective form: endarkened.

Endodus: noun. The opposite of 'exodus'.

Evantionary: proper noun. A term I myself (Evan) and others may use to refer to 'Evan's Dictionary' (this here document). A portmanteau of 'Evan's' and 'Dictionary'.