Comment author: [deleted] 04 July 2018 03:51:24PM 1 point [-]

Or add a tl;dr

Comment author: remmelt  (EA Profile) 04 July 2018 08:33:05PM *  1 point [-]

Hmm, I personally value, say, five people deeply understanding the model and being able to explore and criticise it over, say, a hundred people skimming through a tl;dr. This is why I didn’t write one (besides it being hard to summarise anything more than ‘construal levels matter – you should consider them in the interactions you have with others’, which I basically do in the first two paragraphs). I might be wrong of course, because you’re the second person who has suggested this.

This post might seem deceptively obvious. However, I put a lot of thought into refining the categories and the connections between them, and into explaining them in a way that hopefully enables someone to master them intuitively if they take the time to actively engage with the text and diagrams. I probably did make a mistake by outlining both the model and its implications in the same post, because it makes it unclear what the post is about and causes the discussions here in the comment section to be more diffuse (Owen Cotton-Barratt mentioned this to me).

If someone prefers to not read the entire post, that’s fine. :-)

Comment author: vollmer  (EA Profile) 04 July 2018 02:13:17PM 2 points [-]

I like the model a lot, thanks for posting!

One input: I think it could be useful to find a term for it that's easier to memorize (and has a shorter abbreviation).

Comment author: remmelt  (EA Profile) 04 July 2018 03:56:43PM *  2 points [-]

Hmm, I can’t think of a clear alternative to ‘V2ADC’ yet. Perhaps ‘decision chain’?

Comment author: Denise_Melchin 04 July 2018 02:53:37PM 6 points [-]

I think you're making some valuable points here (e.g. making sure information is properly implemented into the 'higher levels'), but I think your posts would have been a lot better if you had skipped all the complicated modelling and difficult language. It strikes me as superfluous: its main effect seems to be that it makes your post harder to read without adding any content.

Comment author: remmelt  (EA Profile) 04 July 2018 03:54:25PM 1 point [-]

Hi Denise, can you give some examples of superfluous language? I tried to explain it as simply as possible (though sometimes jargon and links are needed to avoid having to explain concepts in long paragraphs) but I’m sure I still made it too complicated in places.

Comment author: Gregory_Lewis 03 July 2018 11:38:20PM *  3 points [-]

Excellent work. I hope you'll forgive me taking issue with a smaller point:

Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. (my emphasis)

I'm not so sure that this is true, although it depends on how big an area you imagine will / should be 'overturned'. This also somewhat ties into the discussion about how likely we should expect to be missing a 'cause X'.

If cause X is another entire cause area, I'd be pretty surprised to see a new one in (say) 10 years which is similar to animals or global health, and even more surprised to see one that supplants the long term future. My rationale is that I see a broad funnel where EAs tend to move into the long term future/x-risk/AI, and once there they tend not to leave (I can think of a fair number of people who made the move from (e.g.) global health --> far future, but I'm not aware of anyone who moved from far future --> anything else). There are also people who have been toiling in the long term future vineyard for a long time (e.g. MIRI), and the fact that we do not see many people moving elsewhere suggests this is a pretty stable attractor.

There are other reasons a cause area could be a stable attractor besides all reasonable roads leading to it. That said, I'd suggest one can point to general principles which would somewhat favour this (e.g. the scope of the long term future; that the light cone commons, stewarded well, permits mature moral action in the universe towards whatever in fact has most value; etc.). I'd say similar points apply, to a lesser degree, to the broad landscape of 'on reflection moral commitments', and so the existing cause areas mostly exhaust this moral landscape.

Naturally, I wouldn't want to bet the farm on what might prove overconfidence, but insofar as it goes it supplies less impetus for lots of exploratory work of this type. At a finer level of granularity (and so a bit further down your diagram), I see less resilience (e.g. maybe we should tilt the existing global poverty portfolio more one way or the other depending on how the cash transfer literature turns out, maybe we should add more 'avoid great power conflict' to the long term future cause area, etc.). Yet I still struggle to see this adding up to radical alteration.

Comment author: remmelt  (EA Profile) 04 July 2018 06:57:57AM *  0 points [-]

I appreciate you mentioning this! It’s probably not a minor point because if taken seriously, it should make me a lot less worried about people in the community getting stuck in ideologies.

I admit I haven’t thought this through systematically. Let me mull over your arguments and come back to you here.

BTW, could you perhaps explain what you meant by the “There are other reasons a cause area...” sentence? I’m having trouble understanding that bit.

And by ‘on reflection moral commitments’, do you mean considerations like population ethics and trade-offs between eudaimonia and suffering?

Comment author: Peter_Hurford  (EA Profile) 03 July 2018 05:30:06PM 0 points [-]

Maybe via EA Grants?

Comment author: remmelt  (EA Profile) 04 July 2018 06:24:17AM 0 points [-]

@Peter, any idea how EA Grants could be used as an intermediary here? (I did apply to EA Grants myself, but I’m not expecting it to cover my own or EAN’s financial runway for longer than 6 months.)

Comment author: hollymorgan 01 July 2018 11:39:28PM 0 points [-]

Do you know of a tax-deductible way to support you as a UK donor?

Comment author: remmelt  (EA Profile) 02 July 2018 04:03:41AM 0 points [-]

Good question... I haven’t really thought about it, but if it’s a £20,000+ donation, perhaps EA Netherlands could register with HMRC? https://www.givingwhatwecan.org/post/2014/06/tax-efficient-giving-guide-uk-donors/

Comment author: RomeoStevens 01 July 2018 04:35:55PM 2 points [-]

Good stuff! You might be interested in both OODA loops and Marr's levels of analysis.

Comment author: remmelt  (EA Profile) 01 July 2018 09:38:50PM *  0 points [-]

Thanks for the pointers!

Would you see OODA loops translated to V2ADC as cycling up and down (parts of) the chain as quickly as possible?

I found this article on Marr’s levels of analysis: http://blog.shakirm.com/2013/04/marrs-levels-of-analysis/ It seems like a useful way of guiding the creation of algorithms (I had never heard of it before – I don’t know much about coding or AI frameworks).

Comment author: Eli_Nathan 01 July 2018 01:16:52PM 0 points [-]

By agentive I sort of meant "how effectively an agent is able to execute actions in accordance with their goals and values" - which seems to be independent of their values/how aligned they are with doing the most good.

I think this is a different scenario to the agent causing harm due to negative corrigibility (though I agree with your point about how this could be taken into account with your model).

It seems possible however that you could incorporate their values/alignment into corrigibility depending on one's meta-ethical stance.

Comment author: remmelt  (EA Profile) 01 July 2018 09:29:43PM *  0 points [-]

Ah, in this model I see ‘effectiveness in executing actions according to values’ as the result of lots of directed iteration of improving understanding at lower construal levels over time (this reminds me of the OODA loop that Romeo mentions above; I’ll also look into the ‘levels of analysis’ now). In my view, that doesn’t require an extra factor.

Under which meta-ethical stance do you think this wouldn’t fit into the model? I’m curious to hear your thoughts to see where it fails to work.

Comment author: Eli_Nathan 01 July 2018 12:33:09PM 1 point [-]

I really liked this post and the model you've introduced!

With regards to your pseudomaths, a minor suggestion could be to treat your product notation as equal to how agentive our actor is. This would allow us to take into account impact that is negative (i.e., harmful processes) by multiplying the product notation by another factor that captures the sign of the action. The change in impact could then be proportional to the product of these two terms.
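[A minimal sketch of how this suggestion might be written out, using illustrative symbols that are not taken from the original post: c_i stands for the per-level factor inside the product and s for the sign of the action.]

\[
\Delta\,\text{Impact} \;\propto\; s \cdot \prod_{i} c_i , \qquad s \in \{-1, +1\}
\]

Here the product term plays the role of how agentive the actor is, while s captures whether the action is net beneficial or harmful.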

Comment author: remmelt  (EA Profile) 01 July 2018 12:53:47PM 0 points [-]

I'm happy to hear that it's useful for you. :-)

Could you clarify what you mean by agentive? The way I see it, at any of the levels from 'Values' to 'Actions', a person's position on the corrigibility scale could be so low as to be negative. But that's not an elegant or satisfactory way of modelling it (i.e. different ways of adjusting poorly to evidence could still lead to divergent results, from an extremely negative Unilateralist's Curse scenario to sheer mediocrity).
