Comment author: John_Maxwell_IV 25 February 2018 12:57:22AM *  5 points [-]

Thanks for this post. Some scattered thoughts:

The main risk for AIA seems to be that the technical research done to better understand how to build an aligned AI will increase AI capabilities generally, meaning it’s also easier for humanity to produce an unaligned AI.

This doesn't seem like a big consideration to me. Even if unfriendly AI comes sooner by an entire decade, this matters little on a cosmic timescale. An argument I find more compelling: If we plot the expected utility of an AGI as a function of the amount of effort put into aligning it, there might be a "valley of bad alignment" that is worse than no attempt at alignment at all. (A paperclip maximizer will quickly kill us and not generate much long-term suffering, whereas an AI that understands the importance of human survival but doesn't understand any other values will imprison us for all eternity. Something like that.)

I'd like to know more about why people think that our moral circles have expanded. I suspect activism plays a smaller role than you think. Steven Pinker talks about possible reasons for declining violence in his book The Better Angels of Our Nature. I'm guessing this is highly related to moral circle expansion.

One theory I haven't seen elsewhere is that self-interest plays a big role in moral circle expansion. Consider the example of slavery. The BBC writes:

It becomes clear that humanitarianism and imperial muscling were able bedfellows...

One can be certain that the high ideals of abolition and the promotion of legitimate trade were equally matched by economic and territorial ambitions, impulses which brought forward partition and colonial rule in Africa in the late 19th century.

You'll note that the villains of the slave story are the slavers--people with an interest in slavery. The heroes seem to have been Britons who would not lose much if slavery was outlawed. Similarly, I think I remember reading that poor northern whites were motivated to fight in the US Civil War because they were worried their labor would be displaced by slave labor.

According to this story, the expanding circle is a side effect of the world growing wealthier. As lower levels of Maslow's hierarchy are met, people care more about humanitarian issues. (I'm assuming that genetic relatedness predicts where on the hierarchy another being falls.) Conquest is less common now because it's more profitable to control a multinational company than control lots of territory. Slavery is less common because unskilled laborers are less of an asset & more of a liability, and it's hard to coerce skilled labor. Violence has declined because sub-replacement fertility means we're no longer in a zero-sum competition for resources. (Note that the bloodiest war in recent memory happened in the Democratic Republic of Congo, a country where women average six children each--source. Congo has a lot of mineral wealth, which seems to incentivize conflict.)

I suppose a quick test for the Maslow's hierarchy story is to check whether wealthy people are more likely to be vegan.

I don't think everyone is completely self-interested all the time, but I think people are self-interested enough that it makes sense for activists to apply leverage strategically.

Re: a computer program used to mine asteroids, I'd expect certain AI alignment work to be useful here. If we understand AI algorithms more deeply, an asteroid miner can be simpler and less likely to be sentient. Contrast with the scenario where AI progress is slow, brain emulations come before AGI, and the asteroid miner is piloted using an emulation of someone's brain.

I'm not comfortable relying on innate human goodness to deal with moral dilemmas. I'd rather eliminate incentives for immoral behavior. In the presence of bad incentives, I worry about activism backfiring as people come up with rationalizations for their immoral behavior. See e.g. biblical justifications for slavery in the antebellum south.

Another moral circle expansion story involves improved hygiene. See also.

Singer and Pinker talk a lot about the importance of reason and empathy to the expanding moral circle. This might be achieved through better online discussion platforms, widespread adoption of meditation, etc.

Anyway, I think that if we take a broad view of moral circle expansion, the best way to achieve it might be some unexpected thing: improving the happiness of voters who control nuclear weapons, helping workers deal with technological job displacement, and so on. IMO, more EAs should work on world peace.

Comment author: Mirco_Vogelgesang 11 February 2018 12:28:26PM *  0 points [-]

Geoengineering and thermodynamics are outside my field of expertise, so I am not really qualified to make a judgement about this Chimney concept - to me it seems questionable whether such a system could actually facilitate that kind of energetic heat exchange without the air reaching a state of equilibrium inside.

Yes, renewables are a lot more competitive now, but the transition towards them remains too slow to stave off peak oil. In addition, they can't compete in every sector (such as transportation, which contributes considerably to both fossil fuel consumption and CO2 emissions).

Anyhow, hoarding oil sounds like an interesting way to drive up its price and thereby create an economic incentive to speed up this transition, but it also demands a very close look at how the global energy system works. That system is designed to match supply to demand very closely, so there is little infrastructure for long-term storage and reserves; it would need to be built. It also means engaging with the social systems and control regimes of the global energy sector. It seems like an interesting idea which will demand quite a bit of research to assess its feasibility, but it is designed for the scope of civil society and the pre-existing neoliberal order, and hence does not set too high a hurdle to get behind.

Also, advocacy to increase carbon taxes and oil taxes may be a cause area here.

Comment author: John_Maxwell_IV 12 February 2018 06:44:12AM 0 points [-]

For what it's worth, I asked my brother (who has a physics degree) about the Superchimney site and he said he didn't think the analysis was that great.

Comment author: John_Maxwell_IV 10 February 2018 07:34:12AM *  1 point [-]

In blackjack, a competent player wins against the house ~49.5% of the time, and the house wins ~50.5% of the time. If I were to record a string of 0s and 1s, where a 1 represents a win by a competent player and a 0 represents a win for the house, my string would look almost exactly like noise. If I sit and record 20 games, that's 2^20 = 1048576 possible strings I could record. So you might naively think that there's no opportunity for useful predictions here. But in fact, the house edge means that in expectation, the house is going to win money and a competent player (absent card counting) is going to lose it.
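A quick Monte Carlo sketch of this point (the 0.495/0.505 split and even-money stakes are simplifying assumptions from the paragraph above; real blackjack payouts vary):

```python
import random

P_WIN = 0.495          # competent player's win rate, per the figures above
N_HANDS = 20           # hands per session; 2**20 possible win/loss strings
N_SESSIONS = 100_000   # number of simulated sessions

random.seed(0)

total = 0
for _ in range(N_SESSIONS):
    # +1 unit on a player win, -1 on a house win (even-money stakes assumed)
    total += sum(1 if random.random() < P_WIN else -1 for _ in range(N_HANDS))

avg_profit = total / N_SESSIONS
# Analytically, expected profit per 20-hand session is
# 20 * (0.495 - 0.505) = -0.2 units, despite each string looking like noise.
print(f"average player profit per session: {avg_profit:.3f} units")
```

Any single 20-hand string is nearly indistinguishable from a fair coin, but averaged over many sessions the small edge dominates.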

In the same way, it's not the stochasticity of the system that matters so much as whether we can make forecasts. Blackjack has loads of stochasticity, but the ultimate financial outcome can still be usefully forecasted. Weather is also very stochastic and may exhibit chaotic properties (see butterfly example), but weather forecasts are still pretty useful. Etc. The issue for EA is that we are trying to make forecasts in domains where there isn't necessarily a history of successful forecasting like there is for the weather. This is a hard problem to deal with, but I don't think it's completely intractable. I suspect the set of skills needed is similar to the ones you need to be a successful investor or run a successful hedge fund.

Comment author: Arepo 01 February 2018 12:39:46AM 6 points [-]

I also feel that, perhaps not now but if they grow much more, it would be worth sharing the responsibility among more than just one person per fund. They wouldn't have to disagree vociferously on many subjects, just provide a basic sanity check on controversial decisions (and spreading the work might speed things up if research time is a limiting factor).

Comment author: John_Maxwell_IV 10 February 2018 07:16:21AM *  2 points [-]

This seems like a good point. OpenPhil has previously drawn analogies between the work it does and the work venture capitalists/angel investors do. One big part of the job of an angel investor is to spend lots of time networking so as to become aware of new funding opportunities. The fact that some fund managers are apparently not even socially engaged enough to explain why they aren't granting the money they've been given seems a little discouraging on this front.

This view also suggests that a good person to add to the EA Funds team might be someone who is already known as a super-networker within the EA community. (Somewhat disappointingly, I'm having a hard time thinking of anyone like this off the top of my head. Proposal: A few people should make it their business to go to every EA event they can possibly go to, monitor and contribute to all online EA discussion spaces, and get to know loads of people in order to introduce people who should know each other etc.)

Comment author: Michelle_Hutchinson 05 February 2018 02:48:52PM 2 points [-]

[Note: It is difficult to compare the cost effectiveness of developed country anti-smoking MMCs and developing country anti-smoking MMCs because the systematic review cited above did not uncover any studies based on a developing country anti-smoking MMC. The one developing country study that it found was for a hypothetical anti-smoking MMC. That study, Higashi et al. 2011, estimated that an anti-smoking MMC in Vietnam would result in one DLYG (discount rate = 3%) for every 78,300 VND (about 4 USD). Additionally, the Giving What We Can report that shows tobacco control in developing countries being highly cost effective is based on the cost-effectiveness of tobacco taxes, not the cost-effectiveness of anti-smoking MMCs, and the estimated cost-effectiveness of tobacco taxes is based on the cost to the government, not the cost to the organization lobbying for a tobacco tax.]

This report briefly discusses MMCs as well as tax increases. It mentions that such campaigns are likely to be much more effective than those in the UK, due to the comparatively far lower awareness of the harms of smoking in developing countries, and the far higher incidence of smoking. I wonder if we could learn more about the potential efficacy of such campaigns by comparing them to campaigns to try to lower road traffic injury? My impression is that in the latter case there has been a bit more study done specifically in developing world contexts.

Comment author: John_Maxwell_IV 10 February 2018 06:59:01AM *  0 points [-]

Yeah, my hunch is that in developed countries, it's higher-leverage to help people quit than spread awareness of smoking harms. For example, there's a supplement called NAC that might help. (I assume that no large trials have been done because no pharmaceutical company can patent it.) Making e-cigarettes easier to get could also be a good idea.

Comment author: John_Maxwell_IV 10 February 2018 06:52:09AM *  1 point [-]

Re: desertification, do you have thoughts on ? (Discussion)

Re: peak oil, my understanding is that sustainable sources of energy are now price-competitive with fossil fuels.

If you are concerned with peak oil, the solution is simple: buy & hoard oil now. This accomplishes a few things. First, it increases the price of oil near-term, which creates a financial incentive to move our infrastructure off oil. Second, if you like, you can personally prevent anyone from burning the oil (since you own it) and it won't release any carbon into the atmosphere. Third, if you choose, you can sell the oil later on (after oil prices have risen) in order to smooth the transition to a post-oil society. Fourth, if your projections are accurate, you will make a tidy profit doing this (which can then be applied to EA causes). To add leverage to this strategy, convince rich speculators that they will make money by buying & hoarding oil.

Comment author: John_Maxwell_IV 30 November 2017 07:28:36AM 0 points [-]

The fragmentation that I perceive in the space may be more by design than is apparent to me. In particular, the US defense department has a strategy for North Korea that may be aligned only with some nonprofits’ goals, and it has ways of encouraging the activism it wants to see and discouraging the activism that it’s not interested in. I already mentioned that nonprofits have refused funding from US government sources so as not to make themselves dependent on a funder that may not be value aligned with them and does not make exit grants.

Can you talk more about why the DoD might not be value aligned? Perhaps the DoD wants to minimize the probability of a nuclear war, whereas humanitarian activists want to alleviate present suffering, and in some cases these goals trade off?

Comment author: Milan_Griffes 29 November 2017 04:33:30AM 3 points [-]

The study of North Korea may produce insight into how dystopian societal attractor points can be averted or what preventive measures (beyond what is present in today’s North Korea) might help people on the inside destabilize them.

This is a great point.

Comment author: John_Maxwell_IV 30 November 2017 06:06:08AM *  1 point [-]

[Highly speculative]

Maybe there's an unpopularity/coercion downward spiral: the more unpopular a leader becomes, the more the leader needs to rely on coercion in order to stay in power, causing further unpopularity, etc.

Having a source of legitimacy, even if it's completely arbitrary (the "divine right of kings"), helps forestall the spiral, because the leader doesn't need coercion to stay in power during periods of unpopularity.

According to this story, the reason communism doesn't end well is that it's an ultra-egalitarian ideology that holds status differences to be illegitimate and revolution to be virtuous. So the only rulers able to stay in power do so through coercion. (See: "dominance" vs "prestige" in social science.)

A surprising implication of this view: the existence of democratically ruled countries makes authoritarian countries less nice to live in. Because democracies make autocrats look less legitimate, autocrats need to rely more on coercion to maintain power. This argument also works in reverse: if Putin makes US democracy look less legitimate, anti-Putin coalitions in Russia have a harder time gaining steam, so Putin doesn't need to crack down as hard on them.

Chinese leaders want a diplomatic solution to the crisis because deposing Kim makes them look less legitimate.

People instinctively want to take a hard line on Kim, but a soft line is an interesting thought experiment. Suppose the US offered Kim $100M to step down. Kim won't take it, because he knows the US can imprison him as soon as he takes his finger off the nuclear button. And there's no way the US can credibly precommit not to do this. Well, actually, I can think of a way to get the same effect: let Kim keep Barron Trump and Sasha Obama as hostages. Not politically viable, and it creates bad incentives for other autocrats, but fun to think about.

In response to What consequences?
Comment author: Denkenberger 25 November 2017 09:21:42PM *  1 point [-]

I'd be curious how much you think previous attempts at calculating multiple impacts address cluelessness, such as Causal Networks Model, saving lives in the present generation and reducing X risk for AI and alternate foods, and cause area comparison.

Comment author: John_Maxwell_IV 26 November 2017 08:49:35AM 1 point [-]
Comment author: DavidMoss 13 October 2017 01:17:08AM 6 points [-]

Instead, participants strongly preferred to continue researching the area they already knew and cared most about, even as other participants were doing the same thing with a different area.

This is one of the things I fear is most likely to fundamentally undermine EA in the long term: people prefer to discuss and associate with people who share their assumptions, concrete concerns, and detailed cause-specific knowledge, so EA functionally splits into 3+ movement areas who never speak with each other and don't understand each other's arguments, and cause neutrality essentially stops being a thing. Notably, I think this has already happened to a significant extent.

Comment author: John_Maxwell_IV 17 October 2017 07:23:22AM 1 point [-]

Could public debates be helpful for this?
