Comment author: Henry_Stanley 09 February 2018 12:18:43AM 4 points [-]

Can confirm that the funds are held as cash, not invested.

Comment author: Arepo 11 February 2018 09:38:21PM 2 points [-]

Huh, that seems like a missed opportunity. I know very little about investing, but aren't there short-term investments with modest returns that would have a one-off setup cost for the fund, such that all future money could go into them fairly easily?
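For a rough sense of the opportunity cost being discussed, here is a minimal sketch. The balance and rate are hypothetical assumptions for illustration, not actual EA Funds figures:

```python
# Illustrative only: rough opportunity cost of holding a fund balance
# as cash rather than in a low-risk, short-term investment. The balance
# and rate below are hypothetical assumptions, not actual EA Funds figures.

def foregone_return(balance: float, annual_rate: float, years: int) -> float:
    """Gain a lump sum would have earned at an annually compounded
    rate, relative to sitting in cash at 0%."""
    return balance * ((1 + annual_rate) ** years - 1)

# e.g. $1,000,000 parked for 2 years at a 2% money-market rate
print(foregone_return(1_000_000, 0.02, 2))  # roughly $40,400 left on the table
```

Even modest rates compound into a non-trivial sum relative to the one-off setup cost, which is the thrust of the comment above.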


Founders Pledge hiring a Personal & Development Assistant in the US

(Disclaimer in case not evident: I work at FP.) Full ad at

PERSONAL & DEVELOPMENT ASSISTANT ($40,000 - $45,000)

About Us: Founders Pledge is a global community of founders and investors who have collectively pledged more than $420,000,000 of their personal proceeds from exit (business...
Comment author: adamaero  (EA Profile) 02 February 2018 04:28:50AM *  0 points [-]

Of the three Black Mirror episodes I've seen, this reminds me of Nosedive:

I'm not saying such a program would succumb to such a weird state in our culture; just a fun little aside. Regardless, I think if the EA insurance program happened, it would be awesome! That said, there are a lot of different ideas in this article. I don't think our emerging movement is big enough...even for an insurance program. What do I know about starting an insurance firm? Nada.

Although, I do not trust people solely concerned about AI safety ;)

In response to comment by adamaero  (EA Profile) on The almighty Hive will
Comment author: Arepo 02 February 2018 11:01:45PM *  0 points [-]

Keep in mind such insurance can happen at pretty much any scale - per Joey's description (above) of richer EAs just providing some support for poorer friends (even if the poorer friends are actually quite wealthy and E2G), for organisations supporting their employees, donors supporting their organisations (in the sense of giving them licence to take risks of financial loss that have positive EV), or EA collectives (such as the EA funds) backing any type of smaller entity.

Comment author: Arepo 01 February 2018 12:39:46AM 6 points [-]

I also feel that, perhaps not now but if they grow much more, it would be worth sharing the responsibility among more than just one person per fund. They wouldn't have to disagree vociferously on many subjects, just provide a basic sanity check on controversial decisions (and spreading the work might speed things up if research time is a limiting factor).

Comment author: RomeoStevens 29 January 2018 08:55:44PM *  5 points [-]

This is a big part of why I find the 'EA is talent constrained not funding constrained' meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

There is a lot of outside view research on this that could be collected and analyzed.

Comment author: Arepo 30 January 2018 08:37:05PM 2 points [-]

I pretty much agree with this - though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of 'people who would plausibly be the best employee for any given EA job' and 'people who would seriously be interested in it given a relatively low EA wage' always forms a perfect circle.

Comment author: DavidMoss 30 January 2018 06:33:33PM 3 points [-]

I didn't read the post as meaning either "scale is bad if it is the only metric that is used" or "Scale, neglectedness, solvability is only one model for prioritisation. It's useful to have multiple different models...."

When looking at scale in a scale, neglectedness, tractability framework, it's true that the other factors can offset the influence of scale. E.g. if something is large in scale but intractable, the intractability counts against the cause being considered and at least somewhat offsets the consideration that the cause is large in scale.

But this doesn't touch on the point this post makes, which is that, looking at scale itself as a consideration, the 'total scale' may be of little or no relevance to the evaluation of the cause; rather, 'scale' is only of value up to a given bottleneck and of no value beyond that. I almost never see people talking of scale in this way in the context of a scale, neglectedness, tractability framework, i.e. dividing up the total scale into tractable bits, less tractable bits and totally intractable bits. Rather, I more typically see people assigning some points for scale, evaluating tractability independently and assigning some points for that, and evaluating neglectedness independently and assigning some points for that.

Comment author: Arepo 30 January 2018 08:27:33PM 1 point [-]

I read this the same way as Max. The issue of the cost to solve (e.g.) all cases of malaria is really tractability, not scale. Scale is how many people would be helped (and to what degree) by doing so. Divide the latter by the former and you have a sensible-looking cost-benefit analysis (one that is sensitive to the 'size and intensity of the problem', i.e. the former).
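That division can be made concrete with a toy calculation. All numbers below are made up purely for illustration; they are not real malaria figures:

```python
# Toy cost-effectiveness calculation: scale (people helped, weighted by
# benefit per person) divided by cost. All numbers are made up for
# illustration; they are not real malaria figures.

def cost_effectiveness(people_helped: float, benefit_per_person: float,
                       total_cost: float) -> float:
    """Benefit per dollar spent: (scale x intensity) / cost."""
    return people_helped * benefit_per_person / total_cost

# A huge but expensive problem vs a much smaller, cheaper one:
big = cost_effectiveness(1_000_000, 1.0, 500_000_000)  # 0.002 benefit per $
small = cost_effectiveness(10_000, 1.0, 1_000_000)     # 0.01 benefit per $
# Despite 100x smaller scale, the second problem is 5x more cost-effective,
# because the ratio, not raw scale, is what matters.
print(big, small)
```

The point of the sketch is just that raw scale drops out once you take the ratio; a cause 100x larger can still lose on cost-effectiveness.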

I do think there are scale-related issues with drawing lines between 'problems', though. If a marginal contribution to malaria nets now achieves twice as much good as the same marginal contribution would in 5 years, are combatting malaria now and combatting malaria in five years 'different problems', or do you just try to average out the cost-benefit ratio between somewhat arbitrary points (e.g. now and when the last case of malaria is prevented/cured)? But I also think the models Max and Owen have written about on the CEA blog do a decent job of dealing with this kind of question.

Comment author: Greg_Colbourn 30 January 2018 06:23:38PM 1 point [-]

Regarding the potentially tax-deductible items mentioned in section 5: usually accommodation, or anything regarded as being for personal use, is not included. It would be regarded as a benefit in kind and therefore taxable (and would also make the tax reporting more complicated!). This is the case in the UK at least. E.g.

Comment author: Arepo 30 January 2018 08:11:38PM *  2 points [-]

I had a feeling that might be the case. That page still leaves some possible alternatives, though, e.g. this exemption:

an employer is usually expected to provide accommodation for people doing that type of work (for example a manager living above a pub, or a vicar looking after a parish)

It seems unlikely, but it's worth looking at whether developing a sufficient culture of EA orgs offering accommodation might satisfy the 'usually expected' criterion.

It also seems a bit vague about what would happen if the EA org actually owned the accommodation rather than reimbursing rent as an expense, or if a wealthy EA would-be donor did and let employees (potentially of multiple EA orgs) stay in it for little or no money (and, if so, in the latter case, whether the 'wealthy would-be donor' could potentially be a conglomerate à la EA Funds).

There seems to be at least some precedent for this in the UK, in that some schools and universities offer free accommodation to their staff, which doesn't seem to come under any of the exemptions listed on the page.

Obviously other countries with an EA presence might have more or less flexibility around this sort of thing. But if you have an organisation giving accommodation to 10 employees in a major developed-world city, you'd be saving (in the UK) 20% tax on something in the order of £800 per month per employee, i.e. about £1,900 per employee or £19,000 per year across the organisation, which seems like a far better return than investing the money would get (not to mention, if it's offered as a benefit for the job, being essentially doubly invested: once on the tax savings, once on the normal value of owning a property).
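As a quick sanity check of that back-of-the-envelope arithmetic (the rent figure and tax rate are the rough assumptions from the discussion, not HMRC figures):

```python
# Rough sanity check of the tax saving discussed above. The rent value
# and tax rate are the comment's own back-of-the-envelope assumptions,
# not HMRC figures.

MONTHLY_RENT_GBP = 800   # assumed accommodation value per employee
TAX_RATE = 0.20          # UK basic income tax rate
EMPLOYEES = 10

saving_per_employee = MONTHLY_RENT_GBP * 12 * TAX_RATE  # per year
total_saving = saving_per_employee * EMPLOYEES          # per year, all staff
print(saving_per_employee, total_saving)  # → 1920.0 19200.0
```

So the saving works out to roughly £1,900 per employee per year, or about £19,000 across ten employees.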

So while I'm far from confident that it would be ultimately workable, it seems like there would be high EV in an EA with tax law experience looking into it in each country with an EA org.


The almighty Hive will

I’ve been wondering whether EA can’t find some strategic benefits from a) a peer-to-peer trust economy, or b) rational coordination towards various goals. These seem like simple ideas, but I haven’t seen them publicly discussed. I’ll start from the related and oversimplifying assumptions that a) there’s a wholly fungible pool of...
Comment author: hollymorgan 05 November 2017 01:46:05AM 2 points [-]

I suggest summarising your reasoning as well as your conclusion in your tl;dr e.g. adding something like the following: "as neglectedness is not a useful proxy for impact w/r/t many causes, such as those where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, or those where progress benefits significantly from economies of scale."

Comment author: Arepo 06 November 2017 08:16:15PM 1 point [-]

Ta Holly - done.

Comment author: caspar42 02 November 2017 09:48:43PM 4 points [-]

A few of the points made in this piece are similar to the points I make here:

For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it also argues that the returns diminish more slowly if the problem is big and that clustered value problems only produce benefits once the whole problem is solved.

Comment author: Arepo 06 November 2017 08:04:13PM *  0 points [-]

Just read this. Nice point about future people.

It sounds like we agree on most of this, though perhaps with differing emphasis. My feeling is that neglectedness is such a weak heuristic that we should abandon it completely, and at the very least avoid making it a core part of the idea of effective altruism. Are there cases where you would still advocate using it?
