Comment author: remmelt  (EA Profile) 06 May 2017 04:21:42PM 1 point [-]

I appreciate this article because it makes these emotional problems – and ways to prevent and deal with them – visible, and dispels the impression that we're all rational, calculating evaluators all of the time. I recall two cases in the EA community over the last year of people I chatted with online who seemed (disclaimer: this is my amateur reading of their extreme behaviour) to be experiencing mania and/or psychosis at some point.

Comment author: remmelt  (EA Profile) 27 April 2017 10:07:51AM *  4 points [-]

I thought this was a really useful framework for looking at the system level. Thank you for posting this!

Quick points after just reading through it:

1) Your phrasing seems to convey too much certainty to me – it flows too neatly into a coherent story. I'm not sure whether you did this to bring your points across more strongly or because that's the confidence level you actually have in your arguments.

2)

If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already.

To me, it appears that you view Holden's position of influence at OpenAI as something like a zero-sum alpha investment decision (where his amount of control replaces someone else's commensurate control). I don't see why Holden couldn't also have a supportive role, where his feedback and different perspectives help OpenAI correct for aspects they've overlooked.

3) Overall principle I got from this: correct for model error through external data and outside views.

Comment author: Kerry_Vaughan 21 April 2017 04:53:04PM 8 points [-]

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' for me. To be honest, it made me wonder if the analysis was self-critical enough (I admit to having scanned it) as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also think 'largely positive' reception does not seem like a good indicator.

I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I'd read on Facebook and was not thinking about responses to the initial launch post.

I agree that it's not fair to say that the criticism has been predominantly about website copy. I've changed the relevant section in the post to include links to some of the concerns we received in the launch post.

I'd like to develop some content for the EA Funds website that goes into potential harms of EA Funds that are separate from the question of whether EA Funds is the best option right now for individual donors. Do you have a sense of what concerns seem most compelling or that you'd particularly like to see covered?

Comment author: remmelt  (EA Profile) 27 April 2017 07:46:31AM 1 point [-]

I forgot to do a disclosure here (to reveal potential bias):

I'm working on the EA Community Safety Net project, which just started on 31 March, together with other committed people. We're now shifting direction from peer insurance against income loss to building a broader peer-funding platform in a Slack team that also includes project funding and loans.

Given the complexities involved and the past project failures I've seen on .impact, it will likely fail to become a thriving platform hosting multiple financial instruments. Having said that, we're aiming high, and I'm guessing there's a 20% chance it will succeed.

I'd especially be interested in hearing people's thoughts on structuring the application form (i.e. the criteria of the project framework) so as to reduce unilateralist's-curse scenarios as much as possible (along with other stupid things we could cause as entrepreneurial creators moving away from the status quo).

Is there actually a list of 'bad strategies naive EAs could think of' where there's consensus amongst researchers that one party's decision to pursue one of them would create systemic damage in expectation? A short checklist (that I could go through before making an important decision) based on surveys would be really useful to me.

Come to think of it: I'll start with a quick Facebook poll in the general EA group. That seems useful for compiling an initial list.

Any other opinions on preventing risks are really welcome – I'm painfully aware of my ignorance here.

Comment author: remmelt  (EA Profile) 27 April 2017 12:52:21AM *  1 point [-]

I haven't looked much into this, but basically I'm wondering if simple, uniform promotion of EA Funds would undermine the capacity of community members in, say, the upper quartile of rationality/commitment to build robust idea-sharing and collaboration networks.

In other words, whether it would decrease their collective intelligence for solving cause-selection problems. I'm really interested in practical insights on improving the collective intelligence of a community (please send me links: remmeltellenis[at]gmail[dot]com).

My earlier comment seems related to this:

Put simply, I wonder if going for a) centralisation would make the 'system' fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who'd approach cause-selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they're much better off handing over their money and employing their skills elsewhere.

(Btw, I admire your openness to improving analysis here.)

Comment author: Kerry_Vaughan 21 April 2017 05:11:07PM 8 points [-]

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps prevent the unilateralist's curse for funding bad projects. As the number of funders increases, it becomes increasingly easy for bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.
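
To make that scaling intuition concrete, here is a minimal sketch with hypothetical numbers (the 5% per-funder error rate is an assumption for illustration, not a real estimate): if each of n independent funders has probability p of misjudging a bad project as worth funding, the chance that at least one of them funds it is 1 - (1 - p)^n, which grows quickly with n.

```python
# Minimal sketch of the unilateralist's curse in funding.
# The 5% per-funder error rate is hypothetical, chosen only to
# illustrate how the risk scales with the number of funders.

p = 0.05  # assumed probability that any one funder misjudges a bad project

for n in (1, 5, 10, 20, 50):
    at_least_one = 1 - (1 - p) ** n  # P(at least one funder backs it)
    print(f"{n:>2} independent funders -> P(bad project gets funded) = {at_least_one:.2f}")
```

Under these assumed numbers the probability rises from 0.05 with a single funder to roughly 0.92 with fifty – which is the sense in which more funders make it easier for a bad project to find a backer.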

Comment author: remmelt  (EA Profile) 27 April 2017 12:22:58AM *  1 point [-]

I hadn't considered the unilateralist's curse and I'll keep this in mind.

To what extent do you think it's sustainable to

a) advocate for a centralised system run by trusted professionals, versus

b) build up the capacity of individual funders to recognise activities that are generally seen as problematic/negative-EV by cause prioritisation researchers?

Put simply, I wonder if going for a) centralisation would make the 'system' fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who'd approach cause-selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they're much better off handing over their money and employing their skills elsewhere.

I admit I don't have a firm grasp of unilateralist's curse scenarios.

Comment author: remmelt  (EA Profile) 20 April 2017 11:51:05PM 18 points [-]

While this way of gauging feedback is far from perfect, our impression is that community feedback has been largely positive. Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' for me. To be honest, it made me wonder if the analysis was self-critical enough (I admit to having scanned it) as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also think 'largely positive' reception does not seem like a good indicator. If a person like Eliezer stood out as the sole person in disagreement, that should give pause for thought.

Even though the article is an update, I'm somewhat concerned that it says little about possible long-term risks. One that seems especially important is the consequence of centralising fund allocation (mostly to managers connected to OP) for the diversity of views and decentralised correction mechanisms within our community. Please let me know where you think I might have made mistakes/missed important aspects.

I especially want to refer to Rob Wiblin's earlier comment: http://effective-altruism.com/ea/17v/ea_funds_beta_launch/aco

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there's now fewer independent donors forming their own views to correct them. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: Daniel_Eth 26 March 2017 01:56:12AM 7 points [-]

This makes me wish we had basic income - I feel like the need for some income to fulfill basic needs stops people from "taking risks" and pursuing these sorts of projects.

Comment author: remmelt  (EA Profile) 06 April 2017 07:22:01PM *  2 points [-]

My approach here is to look for ways to help people in the EA community save money on basic needs. A pattern I'm noticing is that they often seem to be good for community building too.

Examples of this:

1) The EA Community Safety Net project, which I've just started working on with other dedicated people.

2) Shared housing for people involved with EA & rationality. An especially promising example is the Accelerator Project, I think. I've also found 19 rationality/EA houses around the world so far (I'm slowly working on getting one going in the Netherlands).

3) Even simpler: couchsurfing

I think that scaling cost-saving solutions like these is a more promising area to explore than funding basic incomes (depending on how many people take part relative to the time put into kickstarting the project). Whether spending time on starting a cost-saving project yourself is worth it does depend on your skills and opportunities.

For me, funding movement-building/far-future orgs (most of which goes to giving a coordinated group of people incomes so they can take risks) generally makes more sense than a basic income, unless the basic income would target high-potential people only. Or perhaps you could fund someone to start a cost-saving project. :-)
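
As a rough back-of-the-envelope sketch of why I lean this way (every number below is invented purely for illustration, not a real estimate): compare what a cost-saving project frees up per organiser-hour against what a basic income for the same group would cost.

```python
# Hypothetical back-of-the-envelope comparison; all figures here are
# assumptions for illustration only.

participants = 20         # people joining e.g. a shared EA house (assumed)
saving_per_person = 3000  # money saved per person per year, EUR (assumed)
setup_hours = 400         # organiser time to kickstart the project (assumed)

yearly_savings = participants * saving_per_person
print(f"Freed up per year: EUR {yearly_savings:,}")                  # EUR 60,000
print(f"Freed up per organiser-hour (first year): EUR {yearly_savings / setup_hours:.0f}")  # EUR 150

basic_income = 12000      # basic income per person per year, EUR (assumed)
print(f"Basic income for the same group: EUR {participants * basic_income:,}")  # EUR 240,000
```

Under these assumed numbers the project frees up a quarter of what the basic income would cost, paid for in organiser time rather than donor money – which is why the comparison hinges on how many people take part and on the organiser's opportunity costs.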

In response to Utopia In The Fog
Comment author: remmelt  (EA Profile) 29 March 2017 10:11:59AM 2 points [-]

Great to see a nuanced, different perspective. I'd be interested in how work on existing multi-agent problems can be translated into improving the value-alignment of a potential singleton (reducing the risk of theoretical abstraction uncoupling from reality).

Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?

Comment author: remmelt  (EA Profile) 21 March 2017 01:06:27PM *  0 points [-]

This was insightful for me. I'd especially be interested in the impact evaluation models.

Please add me to the mailing list: remmeltellenis[at]gmail[dot]com

Comment author: remmelt  (EA Profile) 18 March 2017 11:21:30AM *  8 points [-]

Thank you for this article. My own concern is that I've personally had little access to guidance on movement building (in the Netherlands) from people more experienced/knowledgeable in this area. I've therefore had to try to understand for myself the risks and benefits of considerations like EA's 'strictness' versus the number of people it appeals to. I don't think someone with a 'coordinator' role like me should be expected to rigorously compile and evaluate research on movement building by him- or herself.

My default position when I started last year was to get as many concrete EA actions happening immediately as possible (e.g. participants, donations, giving pledges, career changes) to create the highest multiplier that I could. What intuitively follows from that is doing lots of emotionally appealing marketing (and getting on TV if possible). I've encountered people at other local groups and national organisations who seemed to think this way, at least in part (I'm probably exaggerating slightly in this paragraph, partly to drive home my point).

I'd like to point out that in the last year, EA Flanders, EA France, EA Australia and EA Netherlands (and probably others I've missed) have launched. Also, the number of local groups still seems to be growing rapidly (I count 313 on EA Hubs, though some are missing and some are inactive). I think it would be a mistake to (implicitly) assume that the entrepreneurial people taking initiative here will, by themselves, come to conclusions that incorporate the currently known risks and benefits to the future impact of the EA movement.

If the CEA Research Division is building expertise in this area, I would suggest it start giving the leaders of grassroots local & national groups (with, say, more than 20 members) the option of short Skype calls where it can share and discuss its insights on movement building as relevant to each local context.

I'd happily connect CEA researchers with other interested national group leaders to test this out (and be the first to sign up myself). Feel free to send me a quick message at remmeltellenis[at]gmail[dot]com (an alternative is to go through CEA Chapters / the Group Organisers call).

Edits: some spelling & grammar nitpicks, further clarification, and a call to action.
