MHarris

232 karma · Joined Oct 2017


Comments (19)

I think, in general, personal consumption decisions should be thought about in the context of moral seriousness (see Will MacAskill's comments in a recent podcast).

Should we take seriously efforts to avoid unnecessary emissions? Yes! Is EA doing this? I'm not sure. My impression is that EAs are fairly likely to avoid unnecessary flights, take public transport, etc. - that's the attitude I take myself, anyway. This is less unusual than veganism - the thoughtful Londoners I'm surrounded by do the same. So I think it would be easy to underestimate the extent to which EAs do this, just because it's less noteworthy.

EAs also fly to conferences which have air conditioning. Is this worth it? Anecdotally, a lot of good seems to emerge from in-person conferences. And air conditioning is important for thinking and learning. So I think we're probably in the right place here, but I'd be interested in a more detailed look at this question.

Should EAs reduce their emphasis on personal meat/dairy/egg consumption? Should they increase their emphasis on their personal carbon footprint? 

I think the answer is probably a bit of both.

I strongly doubt there is truly a trade-off here - I don't think veganism is an especially emphasised aspect of EA, and if there is a strong case for specific changes to reduce personal emissions, I think this could be advocated on its own merits and in addition to veganism.

Thanks for sharing your talk.

I'm at the UK's Competition and Markets Authority. Very happy to talk to anyone about the intersection of competition policy and AI.

How much did the $13 million shift the odds? That's the key question. The conventional political science on this is skeptical that donations have much of an effect on outcomes (though it's a bit more positive about lower-profile candidates like Carrick): https://fivethirtyeight.com/features/money-and-elections-a-complicated-love-story/

(In this case, given the crypto backlash, it's surely possible SBF's donations hurt Carrick's election chances. I don't want to suggest this was actually the case, just noting that the confidence interval should include the possibility of a negative effect, here.)

Signaling is a more interesting idea, but raises more questions about effectiveness. How much is it worth spending to get someone elected on the basis that they've endorsed pandemic prevention for self-interested reasons?

I'm certain EA would welcome you, whether you think AI is an important x-risk or not.

If you do continue wrestling with these issues, I think you're actually extremely well placed to add a huge amount of value as someone who is (i) an ML expert, (ii) friendly/sympathetic to EA, and (iii) doubtful/unconvinced of AI risk. It gives you an unusual perspective which could be useful for questioning assumptions.

From reading this post, I think you're temperamentally uncomfortable with uncertainty and prefer very well-defined problems. I suspect that explains why you feel your reaction is different to others'.

"But I find it really difficult to think somewhere between concrete day-to-day AI work and futuristic scenarios. I have no idea how others know what assumptions hold and what don’t." - this is the key part, I think.

"I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)" - I think it would be useful and interesting to explore this. Even if someone else has done this, I'd be interested in your perspective.

Excession, Surface Detail and The Hydrogen Sonata are the three I'd recommend from a longtermist perspective.

Consider Phlebas is (by some margin) the worst novel in the series. It's a shame it seems like the obvious place to start.

On this theme, I was struck by the 80,000 Hours podcast with Tom Moynihan, which discussed the widespread past belief in the 'principle of plenitude': "Whatever can happen will happen", with the implication that the current period can't be special. In a broad sense (given humanity's/Earth's position), all such beliefs were wrong. But it struck me that several of the earliest believers in plenitude were especially wrong - just think about how influential Plato and Aristotle have been!

I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'?", "Would you join a group called 'effective altruism'?", "What would you think of someone who calls themselves an 'effective altruist'?", and "Would you call yourself an 'effective altruist'?"

I wonder which of these questions is most important in selecting a name.

I don't mind rhetorical descriptions of China as having 'less economic and political freedom than the United States' in a very general discussion. But if you're going to make any sort of proposal like 'there should be more political freedom!', I would feel the need to ask many follow-up clarifying questions (freedom to do what? freedom from what consequences? freedom for whom?) to know whether I agreed with you.

Well-being is vague too, I agree, but it's a more necessary term than freedom (from my philosophical perspective, and I think most others').

This sounds a lot like a version of preference utilitarianism, certainly an interesting perspective.

I know a lot of effort in political philosophy has gone into trying to define freedom - personally, I don't think it's been especially productive, and so I think 'freedom' as a term isn't that useful except as rhetoric. Emphasising 'fulfilment of preferences' is an interesting approach, though. It does run into tricky questions around the source of those preferences (e.g. addiction).
