Comment author: jackva 02 April 2018 05:52:44PM 1 point [-]

Wouldn't you need to look at the price elasticity of egg consumption rather than absolute trends to conclude whether / to which degree the reduced demand by some is moderated by replacement effects?

It also seems that, at least in the medium term, lower prices cannot necessarily support the same scale of egg production so the market would, hopefully, shrink, no?

Comment author: HaukeHillebrandt 03 April 2018 09:31:14AM 0 points [-]

Yes, absolutely, but the elasticity is somewhat hard to calculate (someone should try this though!). My example from above just makes the conservative assumption that the replacement effect is extreme. Of course, it could be that there would have been a 37 million hen increase independent of corporate campaigns and that corporate campaigns have moved 25.8 million of those hens out of cages.

Comment author: HaukeHillebrandt 02 April 2018 11:23:19AM *  4 points [-]

This is very interesting thanks!

These projections of cost-effectiveness seem promising. I have a nagging related worry about what these campaigns have achieved so far, both to estimate a lower bound on their effectiveness and because it might also be relevant to future effectiveness. This worry comes from the hypothesis that there is a displacement effect: consumers and companies who buy cage-free will lower the price of caged eggs and thus increase demand from other consumers and retailers (in the US and potentially abroad).

Looking very briefly at the data, the number of US cage-free hens seems to have gone up in absolute terms by 25.8 million between Jan 2016 and Oct 2017. However, total layer hens in the very similar period from Dec '15 to Dec '17 seem to have gone up by 37 million (spreadsheet with sources). In other words, the absolute number of caged hens seems to be increasing, and corporate campaigns might not have had any effect at all so far. This seems to be in line with industry news.
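For concreteness, here is a minimal back-of-the-envelope sketch of the two readings of these numbers (Python; only the two figures cited above come from the data, everything else is illustrative):

```python
# Back-of-the-envelope sketch of the two readings of the hen data above.
cage_free_increase = 25.8e6    # rise in US cage-free hens, Jan 2016 - Oct 2017
total_flock_increase = 37.0e6  # rise in total US layer hens, Dec '15 - Dec '17

# Pessimistic reading: campaigns had no net effect, and the caged flock
# grew alongside the cage-free one.
caged_increase = total_flock_increase - cage_free_increase
print(f"Implied change in caged hens: +{caged_increase / 1e6:.1f}M")  # +11.2M

# Optimistic reading: the 37M increase would have happened anyway, and
# campaigns moved 25.8M of those hens out of cages.
print(f"Hens kept out of cages vs. counterfactual: {cage_free_increase / 1e6:.1f}M")
```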

This is especially worrying if processed eggs from caged hens might be exported to other countries in the future as egg prices are pushed down further, or if processed eggs from caged hens are imported into the US.

But I'm not an expert on this topic, so I would really like someone to tell me what's wrong with this argument.

Comment author: zdgroff 23 February 2018 07:00:34PM 1 point [-]

This is really fascinating. I think this is largely right and an interesting intellectual puzzle on top of it. Two comments:

1) I would think mission hedging is not as well suited to AI safety as it is to climate or animal activism, because AI safety is not directly focused on opposing an industry. As has been noted elsewhere on this forum, AI safety advocates are not focused on slowing down AI development and in many cases tend to think it might be helpful, in which case mission hedging is counterproductive. I could also imagine a scenario in which AI problems weigh down a company's stock: maybe a big scandal occurs around AI that foreshadows future problems with AGI and also embarrasses AI developers.

2) As kbog notes, it doesn't seem clear that the growth in an industry one opposes means the marginal dollar is more effective. Even though an industry's growth increases the scale of a problem, it might lower its tractability or neglectedness by a greater amount.

Comment author: HaukeHillebrandt 23 February 2018 08:28:43PM 0 points [-]

Excellent comments- thanks!

AI safety advocates are not focused on slowing down AI development and in many cases tend to think it might be helpful, in which case mission hedging is counterproductive.

I know people working on AI safety who would want to slow down progress in AI if it were tractable. I actually think it might be possible to slow down AI by reducing taxes on labor and increasing migration - see https://www.cgdev.org/blog/why-are-geniuses-destroying-jobs-uganda - which I think is a better idea than robot taxes: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/. Somebody should write about this.

But this is not really about speed: mission hedging might work in this case because the stock price of an AI company reflects the probability that the company will come up with better artificial intelligence than the competition, not when it will do so.

I could also imagine a scenario in which AI problems also weigh down a company's stock. Maybe a big scandal occurs around AI that foreshadows future problems with AGI and also embarrasses AI developers.

Note that it is important to diversify within mission hedging, so one company's stock being weighed down doesn't matter much. I feel that any scandals not really related to the AI industry's actual ability to produce better AI faster will likely have very limited effect on stock prices. I'm reminded here of fatalities with self-driving cars, which have not rocked investors' confidence in investing in them. But even if a scandal does depress prices, then that just means that self-driving cars are not as great as we thought they would be (presumably some fatalities are already 'priced in').

But yes, your point is valid in that 'you can't short the apocalypse', as I mention above. Overall, all things considered, I actually think mission hedging might work best for AI risk scenarios.

Comment author: kbog  (EA Profile) 23 February 2018 12:49:51PM *  2 points [-]

Even though medical device sales and the stock prices of corporations selling these devices should often covary, they are merely correlated. One can imagine cases where medical devices sell poorly, and yet global health is poor, or cases where medical devices sell very well, but global health is good. This is why it’s better to invest in corporations that directly cause the bad activity, in this case tobacco.

I'm not sure about this section. You just say that the covariance isn't perfect, therefore we must invest directly in the relevant industry. Sure, the imperfect covariance is a reason to expect it to be better to invest in the relevant industry, but that doesn't mean that hedging in covariant industries is not good at all. You're talking about investments in the relevant industry as if they were a necessary condition for hedging to make sense, when in reality you only give a presumption that it's better than doing it in other ways. There is usually a chance that your investments will fail while the rest of the industry does well anyway, even if you invest directly in the target sector. And investing in a separate, covariant industry has a major benefit: not only is it not a reputational risk, it also isn't a directly harmful activity if the EMH is false.

Also, there is another necessary condition, which is that the marginal value of donations must increase when the problem gets worse. Companies hedge because they have a greater need for money when their stocks fail; they don't really maximize expected profits, they are somewhat risk-averse. Now, do our donations go further when the problems in the world get worse? I'm inclined to say "yes", but I think it's a very small effect. I wrote about this and tested some estimated numbers with a very rudimentary calculation, and it seemed to me that the benefit was arguably too small to worry about, and not sufficient to outweigh the risk of robustly improving the performance of bad industries in the strategy you outline here.

http://effective-altruism.com/ea/16u/selecting_investments_based_on_covariance_with/

https://imgur.com/9of14il

Also, I think that generalizing to selecting investments based on covariance with charity value is the right framework to use here, instead of just looking at this sort of hedging.
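To make that covariance-selection framework concrete, here is a minimal sketch (Python; the asset names, return series, and marginal-value series are all made up for illustration):

```python
import numpy as np

# Hypothetical annual returns of three candidate assets over five periods.
returns = {
    "ai_index":     np.array([0.10, 0.25, -0.05, 0.30, 0.15]),
    "broad_market": np.array([0.08, 0.07,  0.06, 0.09, 0.05]),
    "tobacco":      np.array([0.12, -0.02, 0.20, 0.04, 0.18]),
}

# Hypothetical marginal value of a donated dollar in the same periods
# (higher = the problem got worse, so each dollar does more good).
marginal_value = np.array([1.0, 1.4, 0.8, 1.5, 1.1])

# Select the asset whose returns covary most with marginal charity value.
cov = {name: np.cov(r, marginal_value)[0, 1] for name, r in returns.items()}
best = max(cov, key=cov.get)
print(cov, "-> hedge with:", best)
```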

Comment author: HaukeHillebrandt 23 February 2018 08:02:08PM 0 points [-]

These are excellent comments, thank you!

Regarding your first point on investing in industries that covary with vs. causally relate to the problem: you're right that mission hedging can also work when there is just covariance. I think the main benefit of investing in companies that cause the bad activity is that this will have a tighter covariance than investing in companies that do not cause the bad activity, and we can know this ex ante. I do take your point that investing in companies that cause the bad activity potentially carries more reputational risk (in some cases, for some people). I do not think the reputational-risk argument applies much to small investors or to some investments, such as investing in technology companies to hedge against AI risks. Now, your last point I find most interesting: if the efficient market hypothesis (EMH) doesn't hold, then it's better to invest in things that have a high covariance. I have a strong intuition that the EMH holds for publicly traded stocks, especially for small investors who don't make a big fuss about investing. Overall, I feel drawn to selecting investments that cause the bad activity, due to higher certainty about high future covariance.

Now do our donations go further when the problems in the world get worse? I'm inclined to say "yes", but I think it's a very small effect.

Yes, this crucially depends on whether there are increasing returns to scale for charitable interventions, which is another assumption. However, for me the assumption has intuitive appeal, and I can imagine the effect size being substantial in some cases (I now give a toy model at the beginning of the text). Think of public-good-type interventions whose cost-effectiveness scales pretty linearly with the size of the problem (how many beings are affected).
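As a minimal illustration of that linear-scaling intuition (Python; all numbers are made up):

```python
# A public-good-type intervention (e.g. a policy campaign) costs a fixed
# amount but benefits everyone affected, so its cost-effectiveness scales
# roughly linearly with the size of the problem.
campaign_cost = 1e6      # fixed cost of the intervention, in dollars
benefit_per_being = 0.5  # benefit delivered per affected being

for beings_affected in (1e7, 2e7, 4e7):
    cost_effectiveness = beings_affected * benefit_per_being / campaign_cost
    print(f"{beings_affected:.0e} beings affected -> "
          f"{cost_effectiveness:.1f} units of benefit per dollar")
```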

I took a look at your calculation and I'm sorry to say that I don't quite understand it. However, based on the numbers I see, plugging different parameters into the model would also not be entirely unreasonable. But yes, I agree it might be interesting to have more empirical validation of this.

I think our disagreement might boil down to different intuitions about whether the EMH holds on the stock market and whether there are increasing returns to scale, i.e. whether a charity becomes more effective as the problem gets bigger. I think this is somewhat likely in some cases (though I'm not completely confident in it). So I'm still convinced enough of this that I would advise people to seriously, though carefully, consider using mission hedging over your covariance approach.

Also, I think that generalizing to selecting investments based on covariance with charity value is the right framework to use here, instead of just looking at this sort of hedging.

I think investing in corporations that cause the bad activity is theoretically equivalent to this, and in fact is based on finding a (distal) cause of charity effectiveness. However, as mentioned above, it assumes increasing returns to scale.

But I just thought about finding a more proximal cause of charity effectiveness that can still be implemented directly on the stock market: maybe this is shorting the endowment of your favorite charity. Will MacAskill made a similar comment on your post, saying that it might be worth considering shorting FB if OpenPhil is still heavily reliant on it. Maybe your favorite charity has an endowment and doesn't itself hedge against risks (because its portfolio is not optimally diversified).

Comment author: nobody 19 February 2018 04:57:33PM *  -2 points [-]

I replied about this before to one of your posts. Maybe I did not explain it well. In short, two guys wrote a paper about how combinations of heat and humidity above certain levels could kill everyone who lacks access to air conditioning in large regions of the world, or at least force them to evacuate their countries. Do you have any opinion on the priority level of understanding this compared with other climate causes?

Comment author: HaukeHillebrandt 20 February 2018 12:48:10PM 2 points [-]

Sorry, I missed your previous comment. I'm not an expert on climate change, and this is not necessarily the best place to discuss why this is neglected within effective altruism - I would recommend posting your question to the Effective Altruism Hangout Facebook group and asking for an answer there. The reason you get downvoted is that you post on many different threads even though it's not really related to the discussion. I would recommend reading this before posting, though: https://80000hours.org/2016/05/how-can-we-buy-more-insurance-against-extreme-climate-change/

However, here are my two cents:

  • everybody here agrees that climate change is an important problem
  • the 'wet bulb' phenomenon is known, and mortality from heatstroke is included in most assessments of the overall cost of climate change (see https://www.givingwhatwecan.org/cause/climate-change/ , https://www.givingwhatwecan.org/report/climate-change-2/ , https://www.givingwhatwecan.org/report/modelling-climate-change-cost-effectiveness/ )
  • most scientists agree that the most likely outcome is not that the whole planet will become pretty much uninhabitable. However, there is a chance of this, and extreme risks from climate change are a topic that many people in the EA community care about (see https://80000hours.org/problem-profiles/ )
  • you don't propose a particular intervention, but rather highlight a particular bad effect of climate change. There's more active discussion of the best thing we can do about climate change than of listing its various effects (https://www.givingwhatwecan.org/report/ccl/ , https://www.givingwhatwecan.org/report/cool-earth/ )
  • in effective altruism, we also look at 'neglectedness'. Many people work on climate change, while fewer care about risks from emerging technology (https://80000hours.org/problem-profiles/ ); this is why climate change is not more of a priority area.
Comment author: MichaelPlant 19 February 2018 03:07:23PM 1 point [-]

Okay, but can you explain why it would beat maximising expected returns?

Here's the thought: maximising expected returns gives me more money than mission hedging. That extra money is a pro tanto reason to think the former is better.

However, mission hedging seems to have advantages, such as in shareholder activism: if evil company X makes money, I will have more cash to undermine it, and other shareholders will know this, thus suppressing X's value. This is a pro tanto reason to favour mission hedging.

How should I think about weighing these pro tanto reasons against one another to establish the best strategy? Apologies if I've missed something here, thinking this way is new to me.

Comment author: HaukeHillebrandt 20 February 2018 12:08:32PM 0 points [-]

Thanks for asking for clarification - I'm sorry, I think I've been unclear about the mechanism. It's not really about shareholder activism; that is just an extra benefit.

I've now added a few graphs and a spreadsheet to the introduction as a toy model of why mission hedging beats a strategy that maximizes financial returns. Can you take a look and see whether it's clearer now? Or maybe I'm missing your question.

Comment author: MichaelPlant 18 February 2018 10:31:01PM 3 points [-]

I thought this was super interesting, thanks Hauke. The question that sprang to mind: in what circumstances would it do more good to engage in mission hedging vs trying to maximise expected returns?

Comment author: HaukeHillebrandt 19 February 2018 10:54:33AM *  1 point [-]

Great question!

In theory, mission hedging can always beat maximizing expected returns in terms of maximizing expected utility.
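As a minimal sketch of why (a hypothetical two-state model; all numbers are made up):

```python
# Two-state toy model: the 'bad' state is where the problem gets worse
# (e.g. the hedged industry booms) and donated dollars do more good.
p_bad = 0.5

hedge = {"bad": 1.6, "good": 0.9}         # gross returns of the mission hedge
max_return = {"bad": 1.3, "good": 1.3}    # gross returns of the max-return portfolio
dollar_value = {"bad": 2.0, "good": 1.0}  # marginal value of a donated dollar

def evaluate(portfolio):
    exp_return = p_bad * portfolio["bad"] + (1 - p_bad) * portfolio["good"]
    exp_utility = (p_bad * portfolio["bad"] * dollar_value["bad"]
                   + (1 - p_bad) * portfolio["good"] * dollar_value["good"])
    return exp_return, exp_utility

for name, p in [("mission hedge", hedge), ("max expected return", max_return)]:
    r, u = evaluate(p)
    print(f"{name}: expected return {r:.2f}, expected utility {u:.2f}")
# mission hedge: expected return 1.25, expected utility 2.05
# max expected return: expected return 1.30, expected utility 1.95
```

The hedge gives up some expected return but delivers more money precisely in the state where each dollar does the most good.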

In practice, I think the main considerations here are a) whether you can find a suitable hedge in practice and b) whether you are sufficiently certain that a cause is important, because you give up the flexibility of being cause neutral and tie yourself financially to a particular cause. You can remain cause neutral by trying to maximize expected financial returns.

To me, the two most promising applications seem to be AI safety and career choice. For AI safety, people are often quite certain that it is one of the most pressing causes (as per maxipok or preventing s-risks), and investing in AI companies seems plausible to me (but note Kit Harris's objections in the comment section here). Using mission hedging for one's career might also be good, by joining the military, the secret service, or an AI company for the reasons outlined above; historically, people in the military have sometimes had outsized impact.

A generalized strategy of 'mission hedging': investing in 'evil' to do more good - Hauke Hillebrandt, 18/02/2018
Comment author: DanielHendrycks 10 February 2018 11:41:29PM *  0 points [-]

Try to get Kyunghyun Cho to do work on AI safety research.

I spoke with Kyunghyun Cho a year ago, and he was extremely dismissive of safety. I have no idea why you listed him.

Comment author: HaukeHillebrandt 12 February 2018 12:49:57PM *  0 points [-]

Excellent question - and apologies, I should have been clearer. I've listed him because he is, of course, one of the top computer scientists in deep learning. Also note my caveat that "I don’t have a strong sense if each and every one of these items should really be funded, because I have not vetted them thoroughly, but I hope that they might serve as an inspiration for further research". The idea of this item is that it might be good to try to convince (and incentivise through funding) one of the top computer scientists in ML to work on AI safety. But I agree there may be other people like him who are better suited. Perhaps you have someone better in mind?

Also, note that many people start out being dismissive of safety, and Cho has been retweeting Miles Brundage quite often recently, so maybe he could be convinced to work on this, especially if given funding to work on e.g. 'concrete problems in AI safety'. So I wouldn't rule him out based on anecdotal evidence.

Comment author: Denkenberger 30 January 2018 10:14:41PM 1 point [-]

Climate change: Fund the authors of this paper on the $10 trillion value of better information about the transient climate response. More on Value of information.

Interesting paper - the reason information is so valuable here is that they are talking about spending ~$100 trillion on emissions reductions. Since we are only talking about spending around a few billion dollars on AI, or $100 million on mitigation strategies for nuclear war, and because these risks are significantly bigger than climate change, it shows you how much lower a priority climate change is. Solar radiation management (a type of geoengineering), which you refer to, can be much cheaper, but it still cannot compete (and it potentially poses its own risks).

Comment author: HaukeHillebrandt 31 January 2018 12:56:10PM 0 points [-]

I take your point that because AI safety is somewhat more neglected at the cause level, it scores better on the ITN framework (I actually think all of military spending is kind of tangled up in the scale/tractability of nuclear security, so maybe it would actually score worse than climate change).

In any case, given that this research has a net present value of $10 trillion and would also free up funding and talent for other causes, I think it is still worth considering and might on the margin be better than a mediocre AI safety grant.

Also, note that I have written this list explicitly so that there is some flexibility in what one can pitch to different donors, who might care particularly about climate change as a cause. Within climate change, I believe this might be a particularly good research area to fund, even before geoengineering projects.
