Comment author: remmelt  (EA Profile) 15 August 2018 02:36:30PM 7 points [-]

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

Comment author: remmelt  (EA Profile) 14 August 2018 01:23:03AM *  2 points [-]

Hey, a rough point on a doubt I have. Not sure if it's useful/novel.

Going through the mental processes of a utilitarian (roughly defined) will correlate with others making more utilitarian decisions as well (especially when they're similar to you in relevant personality traits and past exposure to philosophical ideas).

For example, if you act less scope-insensitive, omission-bias-y, or ingroup-y, others will tend to do so as well. This includes edge cases – e.g. people who otherwise would have made decisions that roughly fall in the deontologist or virtue-ethics bucket.

Therefore, for every moment you end up shutting off utilitarian-ish mental processes in favour of ones where you think you're doing moral trade (including hidden motivations like rationalising acting from social proof or discomfort in diverging from your peers), your multi-universal compatriots will do likewise (especially in similar contexts).

(In case it looks like I'm justifying being a staunch utilitarian here, I have a more nuanced anti-realism view mixed in with lots of uncertainty on what makes sense.)

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 17 July 2018 10:43:12PM *  4 points [-]

Yeah. I feel like the EA community already has a discussion platform with very granular topic divisions in Facebook, and yet here we are. I'm not exactly sure why the EA Forum seems to me like it's working better than Facebook, but I figure if it's not broken, don't fix it. Also, I think something like the EA Forum is inherently a bit more fragile than Facebook... any Facebook group is going to benefit from Facebook's ubiquity as a communication tool/online distraction.

You made a list of posts that we're missing out on now... those kinda seem like the sort of posts I see on EA Facebook groups, but maybe you disagree?

Comment author: remmelt  (EA Profile) 18 July 2018 09:39:56PM *  0 points [-]

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

The example posts I gave are on the extreme end of the kind of granularity I'd personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.

I now feel that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this, because on our current platform it's too hard to gather around topics narrower than a general interest in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).

So I've changed my mind from a naive 'we should overhaul the entire system' view to a 'we should tinker with it in ways we expect will facilitate better interactions, and then see if they actually do' view.

Thanks for your points!

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 13 July 2018 10:02:35AM *  6 points [-]

This sounds like it might be a bad idea to me. I just wrote a long comment about the difficulty the EA community has in establishing Schelling points. This forum strikes me as one of the few successful Schelling points in EA. I worry that if subforums are done in a careless way, dividing a single reasonably high-traffic forum into lots of smaller low-traffic ones, one of the few Schelling points we have will be destroyed.

Comment author: remmelt  (EA Profile) 17 July 2018 10:58:44AM *  2 points [-]

Another problem would be if creating extra sub-forums resulted in people splitting their conversations up even more between those and the Facebook and Google groups. Reminds me of the XKCD comic on the problem of creating a new universal standard.

I think you made a great point in your comment that people need to do ‘intensive networking and find compromises’ before attempting to establish new Schelling points.

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 13 July 2018 10:02:35AM *  6 points [-]

This sounds like it might be a bad idea to me. I just wrote a long comment about the difficulty the EA community has in establishing Schelling points. This forum strikes me as one of the few successful Schelling points in EA. I worry that if subforums are done in a careless way, dividing a single reasonably high-traffic forum into lots of smaller low-traffic ones, one of the few Schelling points we have will be destroyed.

Comment author: remmelt  (EA Profile) 17 July 2018 10:32:57AM *  0 points [-]

Hmm, do you think Schelling points would still be destroyed if it were just clearer where people could meet to discuss certain specific topics, besides a ‘common space’ where people could post on topics relevant to many people?

I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, the problem I see is that we need more well-defined Schelling points as the community grows, yet currently the EA Forum is a vague place to go ‘to read and write posts on EA’. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they’re both hard to find and disconnected from each other (i.e. it’s hard to zoom in and out of topics, or to explore parallel topics that one can work on and discuss).

I think you’re right that you don’t want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure but also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if they took place on the main forum instead.

Comment author: Naryan 10 July 2018 09:06:14PM 1 point [-]

Fantastic post! It's a significant upgrade from the "terminal/instrumental values" mental model I was previously using.

When I first joined EA, I looked at the annual survey of EAs and was surprised to see so much variation in how EAs ranked the importance of the major causes. I thought that the group would be moving towards a consensus, and that each individual member would be able to trace their actions up towards their understanding of the most important causes.

Personally, I tried to build up my own understanding of the cause priority from strong foundations, doing my best to answer meta questions like "do I value all people equally", "how do I weight animal suffering vs human happiness". From there, I worked my way down the V2ADC, trying to meta-analyze the research on causes, eventually coming to an area that I felt confident was the best place to add value.

I think with a bit more nuance, the EA survey could serve as a good feedback mechanism to see where on the chain we all see ourselves, and to see if the sum of the parts adds up to anything resembling a consistent whole. Will the EA community end up converging in beliefs and strategy? Is it an elephant in the room to say that half of the people working on X cause ought to shift to Y cause because the people up the chain are confident that it is a better move for the community? Even if the exploratory folks at the bottom raised their evidence up the chain, would we have enough corrigibility to pivot? (Love that word, totally gonna use it more!)

Comment author: remmelt  (EA Profile) 11 July 2018 05:20:37AM *  1 point [-]

Hi @Naryan,

I’m glad that this is a more powerful tool for you.

And kudos for working things out from the foundations up! Personally, I still need to take a few hours with pen and paper to systematically work through the decision chain myself. A friend has been nudging me to do that. :-)

Gregory Lewis makes the argument above that some EAs are moving towards working on the long-term future and few are moving back out. I’m inclined to agree with him that they probably have good reasons for that.

I’d also love to see the results of some far mode vs. near mode questions put in the EA Survey, or perhaps sent out by Spencer Greenberg (not sure if there’s an existing psychological scale to gauge how much people are in each mode when working throughout the day). And of course, how they correlate with cause area preferences.

At EA Global London last year, Max Dalton explained to me that ‘corrigibility’ is one of the most important traits to look for when selecting people you want to work with, so credit to him. :-) My contribution here is adding the distinction that people often seem more corrigible at some levels than others, especially when they’re new to the community.

(also, I love that sentence – “if the exploratory folks at the bottom raised evidence up the chain...”)

Comment author: Brendon_Wong 11 July 2018 12:28:03AM 3 points [-]

Thanks for the insight Remmelt! A good way to start this would be to create an MVP much like Ryan Carey suggested so that we can get started quickly, with a prebuilt application system (Google Forms, Google Docs, a forum, etc) and possibly using a DAF or fiscal sponsor. The web app itself could take a while, but having public projects and public feedback in a forum or something would be reasonably close and take much less effort.

I am meeting with someone who has made some progress in this area early next week. Based on traction and the similarity between the other person's system and this system, I'll see if a new venture in this space could add value, or if existing projects in this space have a good chance of succeeding. One way or the other I'll be in touch!

Comment author: remmelt  (EA Profile) 11 July 2018 04:52:41AM 1 point [-]

Great! Cool to hear that you’re already gaining traction on this.

Perhaps EAWork.club has potential as a launch platform?

I’d also suggest emailing Kerry Vaughan from EA Grants to get his perspective. He’s quite entrepreneurial so probably receptive to hearing new ideas (e.g. he originally started EA Ventures, though that also seemed to take the traditional granting approach).

Let me know if I can be of use!

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: Julia_Wise  (EA Profile) 10 July 2018 03:34:40PM 4 points [-]

CEA is thinking along these same lines for the new version of the Forum! The project manager is planning to reply with more detail in the next day or so.

In response to comment by Julia_Wise  (EA Profile) on Open Thread #40
Comment author: remmelt  (EA Profile) 10 July 2018 04:59:51PM 0 points [-]

Wow, nice! Would love to learn more.

Comment author: Gregory_Lewis 03 July 2018 11:38:20PM *  3 points [-]

Excellent work. I hope you'll forgive me taking issue with a smaller point:

Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. (my emphasis)

I'm not so sure that this is true, although it depends on how big an area you imagine will / should be 'overturned'. This also somewhat ties into the discussion about how likely we should expect to be missing a 'cause X'.

If cause X is another entire cause area, I'd be pretty surprised to see a new one in (say) 10 years which is similar to animals or global health, and even more surprised to see one that supplants the long term future. My rationale is that I see a broad funnel where EAs tend to move into the long term future/x-risk/AI, and once there they tend not to leave (I can think of a fair number of people who made the move from (e.g.) global health --> far future, but I'm not aware of anyone who moved from far future --> anything else). There are also people who have been toiling in the long term future vineyard for a long time (e.g. MIRI), and the fact we do not see many people moving elsewhere suggests this is a pretty stable attractor.

There are other reasons for a cause area being a stable attractor besides all reasonable roads leading to it. That said, I'd suggest one can point to general principles which would somewhat favour this (e.g. the scope of the long term future, that the light cone commons, stewarded well, permits mature moral action in the universe towards whatever in fact has most value, etc.). I'd say similar points apply, to a lesser degree, to the broad landscape of 'on reflection moral commitments', and so the existing cause areas mostly exhaust this moral landscape.

Naturally, I wouldn't want to bet the farm on what might prove overconfidence, but insofar as it goes it supplies less impetus for lots of exploratory work of this type. At a finer level of granularity (and so a bit further down your diagram), I see less resilience (e.g. maybe we should tilt the existing global poverty portfolio more one way or the other depending on how the cash transfer literature turns out, maybe we should add more 'avoid great power conflict' to the long term future cause area, etc.). Yet I still struggle to see this adding up to radical alteration.

Comment author: remmelt  (EA Profile) 10 July 2018 12:19:26PM *  1 point [-]

First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered. I.e. in, say, the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of ‘they should change their decisions a lot if they put much more of the community’s brainpower into analysing data from a granular level upwards’.

So I appreciate that you actually gave specific reasons for why you'd be surprised to see a new focus area being taken up by people in the EA community in the next 10 years! Your arguments make sense to me and I’m just going to take up your opinion here.

Interestingly, your interpretation of this as evidence that there shouldn't be a radical alteration in which causes we focus on can be seen as both an outside view and an inside view. It's an outside view in the sense that it weights the views of people who've decided to move in the direction of working on the long term future. It's also an inside view in that it doesn't consider what percentage of past cosmopolitan movements – ones whose members converged on working on a particular set of problems – were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades and selection effects).

A historical example where this went wrong is how, in the 1920s, Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin's authoritarian regime and in Nazi Germany, respectively. Although I haven't done a survey of other historical movements (has anyone compiled such a list?), I think I still feel slightly more confident than you that we'll radically alter what we work on after 20 years if we make a concerted effort now to structure the community around enabling a significant portion of our 'members' (say 30%) to work together to gather, analyse and integrate data at each level (whatever that means).

It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I've made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I'd be curious to hear if any of these points have caused you to update any of your other intuitions:

Worldviews

  • more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences

  • research into how different humans trade off suffering and eudaimonia differently

  • a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontologist vs. consequentialist judgements, and scope sensitivity)

Focus areas:

Global poverty

  • use of better metrics for wellbeing – e.g. life satisfaction scores and future use of real-time tracking of experiential well-being – that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)

  • use of better approaches to estimate environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points

Existential risk

  • more research on how to avoid evolutionary/game-theoretical “Moloch” dynamics, instead of the current "Maxipok" focus on ensuring that future generations will live and hoping that they have more information to assess and deal with problems from there

  • for AI safety specifically, I could see the focus shifting from a single agent – produced by, say, a lab – that presumably becomes powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. Robin Hanson's work on Ems), or perhaps more research on how a single agent could be made out of specialised sub-agents representing the interests of various beings. I could also see a shift in focus to assessing and ensuring the welfare of sentient algorithms themselves.

Animal welfare

  • more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on what species to target

  • shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems

Community building

  • Some concepts like high-fidelity spreading of ideas and strongly valuing honesty and considerateness seem robust

  • However, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms and a shift away from local events and coffee chats to interactions on online (virtual) platforms

Comment author: remmelt  (EA Profile) 10 July 2018 07:26:33AM *  8 points [-]

I’m grateful that someone wrote this post. :-)

Personally, I find your proposal of fusing three models promising. It does sound difficult to get right, in terms of both the technical web development and setting up the processes that actually enable users to use the grant website as intended. It would probably require a lot of iterative testing as well as in-person meetings with stakeholders (i.e. this looks like a 3-year project).

I’d be happy to dedicate 5 hours per week for the next 3 months to contribute to working it out further with key decision makers in the community. Feel free to PM me on Facebook if you’d like to discuss it further.

Here are some further thoughts on why the EA Grants structure has severe limitations:

My impression is that CEA staff have thoughtfully tried to streamline a traditional grant making approach (by, for example, keeping the application form short, deferring to organisations that have expertise in certain areas, and promising to respond in X weeks) but that they’re running up against the limitations of such a centralised system:

1) not enough evaluators specialised in certain causes and strategies who have the time to assess track records and dig into documents

2) a lack of iterated feedback between possible donors and project leaders (you answer many questions and then only hear about how CEA has interpreted your answers and what they think of you 2 months later)

Last year, I was particularly critical of how little useful feedback was shared with applicants after they were denied with a standard email. It’s valuable to know why your funding request is denied – whether it is because CEA staff lack domain expertise or because of some inherent flaws in your approach that you should be aware of.

But applicants ended up having to take the initiative themselves to email CEA questions because CEA staff never got around to emailing some brief reasoning for their decisions to the majority of the 700ish applicants that applied. On CEA’s side there was also the risk of legal liability – that someone upset by their decision could sue them if a CEA staff member shared rough notes they made that could easily be misinterpreted. So if you’re lucky you receive some general remarks and can then schedule a Skype call to discuss those further.

Further, you might then discover that a few CEA staff members have rather vague models of why a particular class of funding opportunities should not be accepted (e.g. one CEA staff member was particularly hesitant about funding EA groups last year because it would make coordinating things like outreach [edit] and having credible projects branded as EA more difficult).

Finally, this becomes particularly troublesome when outside donors lean too heavily on CEA’s accept/deny decision (which I think happened at least once with EA Netherlands, the charity I’m working at). You basically have to explain to all future EA donors you come into contact with why one of the most respected EA organisations judged your promising start-up not impactful enough to fund.

I’d be interested in someone from the EA Grants team sharing their perspective on all this.
