Comment author: RandomEA 21 May 2018 07:26:51PM 1 point [-]

First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.

This statement could be interpreted as suggesting that people should use a two-step process: first, choose a problem based on how pressing it is and then second, decide how to contribute to solving that problem.* That two-step approach would be a bad idea because some people may be able to make a greater impact working on a less pressing problem if they are especially effective at addressing that problem. Because of this, information about how pressing different problems are relative to each other should not be used to choose a single problem; instead, it should be used as background information when comparing careers across problems.

*I doubt that's what you actually meant since you wrote the linked article that discusses personal fit. But I figured some people might be unfamiliar with that article, so I thought it'd be worthwhile to note the issue.

Comment author: Robert_Wiblin 21 May 2018 10:46:46PM 1 point [-]

Yes - the reason you need to look at a bunch of activities rather than just one is that your personal fit, both in general and between earning vs direct work, could materially reorder them.

Comment author: Denise_Melchin 20 May 2018 11:42:00PM *  23 points [-]

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is the focus on EA orgs. Effective Altruism is, or should be, about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly means that Effective Altruist orgs are more talent than funding constrained.

Whether 'doing the most good' in the world is more talent than funding constrained is much harder to establish, but it is the question that actually matters.

If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren't branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell-recommended charities won't be enough to fix it either. Using EA-branded framing isn't unique to you, but it can make us lose track of the bigger picture: all the problems that still need to be solved, and all the funding that is still needed to solve them.

If you want to focus on fixing global poverty, just because EA focuses on GiveWell-recommended charities doesn't mean earning to give is the best approach - how about training to be a development economist instead? The world still needs more than ten additional development economists. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained - you'd need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

Comment author: Robert_Wiblin 21 May 2018 03:01:54PM 3 points [-]

"Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained."

It would be true if that were what was meant, but the speaker might also mean that 'anything which existing EA donors like Open Phil can be convinced to fund' will also be(come) talent constrained.

Inasmuch as there are lots of big EA donors willing to change where they give, activities that aren't branded as EA may still be latently talent constrained, if they can be identified.

The speaker might also think activities branded as EA are more effective than the alternatives, in which case the money/talent balance within those activities will be particularly important.

Comment author: Robert_Wiblin 21 May 2018 02:55:42PM 2 points [-]

Like you, at 80,000 Hours we view the relative impact of money vs talent to be specific to particular problems and potentially particular approaches too.

First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.

Comment author: Robert_Wiblin 24 April 2018 11:47:12PM 10 points [-]

This is a useful analysis; I expect it will be incorporated into our discussion of discount rates in the career guide.

Perhaps I missed it, but how many of the 7 who left the 50% category went into the 10% category rather than dropping out entirely?

Comment author: Robert_Wiblin 20 April 2018 06:39:13PM 4 points [-]

I made a similar observation about AI risk reduction work last year:

"Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk * reduced by 1% * 8 billion lives) = $125 per life saved. That compares with $3,000-7,000+ for AMF.

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime)."
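The back-of-envelope arithmetic in the quote above can be checked directly. All the numbers below come straight from the comment; none are independent estimates:

```python
# Check of the AI-risk cost-effectiveness figures quoted above.
# Every input is taken verbatim from the comment's stated assumptions.

risk_of_catastrophe = 0.10       # "1 in 10 risk of AI killing everyone within 50 years"
relative_risk_reduction = 0.01   # an extra $1 billion "could reduce the size of this risk by 1%"
lives_at_stake = 8e9             # 8 billion lives
spending = 1e9                   # $1 billion on safety research

# Expected lives saved = risk * fraction of risk removed * lives at stake
expected_lives_saved = risk_of_catastrophe * relative_risk_reduction * lives_at_stake

cost_per_life = spending / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved:,.0f}")  # 8,000,000
print(f"Cost per life saved:  ${cost_per_life:,.0f}")        # $125
```

The $125 figure holds only under a hard "inside view" on both inputs; the comparison with AMF's ~$3,000-7,000+ per life is doing all the work through the assumed 1-in-10 risk and 1% tractability numbers.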

Comment author: Robert_Wiblin 08 April 2018 01:12:17AM *  6 points [-]

Now that I've had a chance to read this properly I have one key follow-up question.

People often talk about the possibility of solving societal coordination problems with cryptocurrency, but I have yet to see a concrete example of this.

Is it possible to walk through a coordination failure that today could be best tackled using blockchain technology, and step-by-step how that would work?

This would be most persuasive if the coordination failure was in one of the priority problems mentioned above, but I'd find any specific illustration very helpful.

Comment author: Robert_Wiblin 04 April 2018 12:16:28AM *  7 points [-]

Thanks for writing this up, you cover a lot of ground. I don't have time to respond to it now, but I wanted to link to a popular counterpoint to the practical value of blockchain technology: Ten years in, nobody has come up with a use for blockchain.

Comment author: Robert_Wiblin 04 December 2017 12:44:10AM *  9 points [-]

"Within countries, per capita GDP growth does not appear to lead to corresponding increases in well-being."

I spent a bunch of time looking into this 'Easterlin paradox' and concluded it's more likely than not that it doesn't exist. If you look across all the countries we have data on up to the present day, increased income is indeed correlated with increased levels of SWB. Well-being isn't purely positional or purely absolute; it's a mix.

My impression is that people who study this topic are divided on the correct interpretation of the data, so you should take everyone's views (including mine) with a pinch of salt.

Comment author: Halffull 18 November 2017 12:11:17AM -1 points [-]

Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They both see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person's estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, which takes into account the equal information both of them have to pull them away from their prior.

This is what I'm talking about when I say "just so stories" about the data from the GJP. One explanation is that superforecasters are going through this thought process; another would be that they discard non-superforecasters' knowledge, and therefore end up as more extreme without explicitly running the extremizing algorithm on their own forecasts.

Similarly, the existence of superforecasters themselves argues for a non-modest epistemology, while the fact that the extremized aggregation beats the superforecasters may argue for a somewhat more modest epistemology. To my mind, saying that the data here points one way or the other is cherry-picking.
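The two-peers coin example above can be sketched with a conjugate Beta prior. The specific numbers below (a Beta(10, 10) prior and 18 heads in 20 independent flips per peer) are my illustrative assumptions, chosen so each peer's posterior mean lands at exactly 0.7; they are not from the comment:

```python
# Sketch of the "two epistemic peers" example using Beta-binomial updating.
# Assumptions (mine, for illustration):
#   - Beta(10, 10) prior: mass concentrated near 0.5, matching "told the
#     coin will probably be close to fair".
#   - Each peer independently observes 18 heads in 20 flips, chosen so that
#     each peer's own posterior mean comes out at 0.7.

prior_a, prior_b = 10, 10   # Beta prior roughly centred on a fair coin
heads, tails = 18, 2        # what each peer independently observes

# Each peer's own posterior mean after seeing their flips:
solo_estimate = (prior_a + heads) / (prior_a + prior_b + heads + tails)

# A modest aggregator treats both peers' (independent) data sets as evidence,
# but counts the shared prior only once:
pooled_estimate = (prior_a + 2 * heads) / (prior_a + prior_b + 2 * (heads + tails))

print(f"Each peer alone:   {solo_estimate:.3f}")    # 0.700
print(f"Modest aggregate:  {pooled_estimate:.3f}")  # 0.767 -- more extreme than 0.7
```

This is the mechanism the comment describes: because both peers were pulled in the same direction away from the shared prior, pooling their evidence pushes the modest estimate beyond either individual estimate, without any explicit extremizing step.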

Comment author: Robert_Wiblin 18 November 2017 01:00:17AM *  1 point [-]

"...the existence of super-forecasters themselves argues for a non-modest epistemology..."

I don't see how. No theory on offer argues that everyone is an epistemic peer. All theories predict some people have better judgement and will be reliably able to produce better guesses.

As a result I think superforecasters should usually pay little attention to the predictions of non-superforecasters (unless it's a question on which expertise pays few dividends).
