Comment author: Ben_Todd 10 February 2018 08:29:16AM 2 points [-]

The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

This is what many of the core organisations are focused on :) You could see it as 80k's whole purpose. It's also why CEA is doing things like EA Grants, and Open Phil is doing the AI Fellowship.

It's also a central internal challenge for any org that has funding and is trying to scale. But it's not easy to solve.

Comment author: RomeoStevens 10 February 2018 04:11:34PM 2 points [-]

Is 80k trying to figure out how to interview the very best recruiters and do some judgmental bootstrapping?

Comment author: casebash 29 January 2018 11:58:09PM *  1 point [-]

Even if we had funds, the problem of who to fund is a hard one, and perhaps the money would be better spent simply hiring more staff for EA orgs? The best way to know that you can trust someone is to know them personally, but distributing funds in that manner creates concerns about favouritism, etc.

Comment author: RomeoStevens 30 January 2018 05:10:17AM *  4 points [-]

I strongly disagree, on the grounds that these sorts of generators, while not fully general counterarguments, are sufficiently general that I think they are partly responsible for EAs' bias towards inaction. Also, we should be trying more cheap experiments when money is supposedly not the bottleneck.

Comment author: Kathy_Forth 29 January 2018 12:27:15AM *  7 points [-]

For this group to make an effective social safety net for EAs having a bad time, more is needed than just money. When a real problem does arise, people tend to spam the affected person with uninformed suggestions that won't work. They're trying to help, but due to the "what you see is all there is" bias and others, they can't see that they are uninformed and spamming. The result is that the problem doesn't seem real to anyone.

So, the person who has a problem, who may not have any time or emotional energy or even intellectual capacity left over, must explain why dozens of spitball suggestions won't work.

How spitballing can totally sabotage people in need of help:

Imagine that to obtain help, you have to patiently and rigorously evaluate dozens of ill-conceived suggestions, support your points, meet standards of evidence, seem to have a positive attitude about each suggestion, and try not to be too frustrated with the process and your life.

The task of convincing people your problem is real while a bunch of friends are accidentally spamming you with clever but uninformed suggestions might be the persuasive challenge of a lifetime. If any of the ill-conceived options still seem potentially workable to your friends, you will not be helped. To succeed at this challenge, you have to make sure that every spitball you receive from friends is thoroughly addressed to their satisfaction.

A person with a real problem will be facing this challenge when they're stressed out, time-poor and emotionally drained. They are at their worst.

A person at their worst shouldn't need to take on the largest persuasive challenge of their life at that time. To assume that they can is about as helpful as "Let them eat cake."

There's an additional risk that people will sour on helping you if they see that lots of solution ideas are being rejected. This is despite the fact that the same friends will tell you "most ideas will fail" in other circumstances. They know that ideas are often useless, but instead of realizing that the specific set of ideas in question is uninformed or unhelpful, some people will jump to the conclusion that the problem is your attitude.

Just the act of evaluating a bunch of uninformed spitball suggestions can get you rejected!

Distinguishing between a problem that is genuinely too hard for the person to solve and a person who has a bad attitude about solving their problem is itself a challenge. It's hard for both sides to communicate well enough to figure this out. Often a huge amount of information has to be exchanged.

The default assumption seems to be that a person with a problem should talk to a bunch of friends about it to see if anyone has ideas. If you count up the number of hours it actually takes to discuss dozens of suggestions in detail, multiplied by dozens of people, it's not pretty. For many people who are already burdened by a serious problem, that sort of time investment just isn't viable. In some cases the entire problem is insufficient time, so it can be unfair to demand that they do this.

In the event that potential helpers are not convinced the problem is real, or aren't convinced to take the actions that would actually work, the person in need of help could easily waste 100 hours or more with nothing to show for it. This will cause them to pass up other opportunities and possibly make their situation far worse due to things like opportunity costs and burnout.

Solution: well-informed advocates.

For this reason, people who are experiencing a problem need an advocate. The advocate can take on the burden of evaluating solution ideas and advocating in favor of a particular solution.

Given that it often requires a huge amount of information to predict which solution ideas will work and which will fail, an advocate probably needs to be well-informed about the type of problem involved, or at least know, from past experience, what it is like to go through some sort of difficult time.

Comment author: RomeoStevens 29 January 2018 08:59:50PM *  7 points [-]

Another framing of that solution: EA needs a full time counselor who works with EAs gratis. I expect that paying the salary of such a person would be +ROI.

Comment author: Kathy_Forth 29 January 2018 12:42:10AM *  6 points [-]

The main reason I'm not looking for a full-time EA job right now is that I don't have enough runway and financial security. I estimate that it will take around 2 years to build the amount of financial security and runway I need. If you succeed in building a safety net, this might result in a surge of people going into EA jobs. I'm not sure how many people are building up runway right now, or how many hours of EA work you could free up by liberating them from that, but it could be a lot!

Comment author: RomeoStevens 29 January 2018 08:55:44PM *  6 points [-]

This is a big part of why I find the 'EA is talent constrained not funding constrained' meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

There is a lot of outside view research on this that could be collected and analyzed.

Comment author: RomeoStevens 25 January 2018 08:33:11PM 3 points [-]

Would you be willing to comment a bit on the search strategies you used to generate this list? I think it would be highly useful.

Comment author: JesseClifton 22 December 2017 05:20:48PM 3 points [-]

I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.

I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.

(People who hold this view might not find the usual Dutch book or representation theorem arguments compelling.)

Comment author: RomeoStevens 24 December 2017 07:48:19AM 3 points [-]

I'll second this. When double-cruxing EV calculations with others, it is clear that they are often quite parameter-sensitive, and awareness of that sensitivity is rare / does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative-to-quantitative heuristics is quite stressful and frustrating. Results from sufficiently complex EV calcs usually fall prey to ontology failures, i.e. key assumptions turned out to be wrong about 25% of the time in studies of analyst performance in the intelligence community, and most scenarios rest on more than four key assumptions.
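As a rough illustration of the parameter sensitivity being described, here is a minimal sketch: a toy cost-effectiveness model with made-up point estimates and made-up plausible ranges, swept one parameter at a time. The model, the numbers, and the parameter names are all hypothetical, not taken from any actual EV calculation.

```python
def expected_value(p_success, value_if_success, cost):
    """Expected value per dollar for a toy intervention."""
    return p_success * value_if_success / cost

# Point estimates (made up for illustration).
base = dict(p_success=0.1, value_if_success=1e6, cost=5e4)

# Plausible low/high bounds for each parameter (also made up).
ranges = {
    "p_success": (0.02, 0.3),
    "value_if_success": (2e5, 5e6),
    "cost": (2e4, 2e5),
}

baseline = expected_value(**base)
print(f"baseline EV per dollar: {baseline:.2f}")

# One-at-a-time sweep: vary each parameter over its range, hold the rest fixed.
for name, (lo, hi) in ranges.items():
    for bound in (lo, hi):
        ev = expected_value(**dict(base, **{name: bound}))
        print(f"{name}={bound:g}: EV per dollar = {ev:.2f} "
              f"({ev / baseline:.1f}x baseline)")
```

Even in this toy version, the bottom-line number swings by roughly an order of magnitude in each direction as single inputs move across their plausible ranges, which is the kind of sensitivity that rarely gets surfaced in informal EV arguments.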

Comment author: RomeoStevens 24 December 2017 05:37:45AM *  2 points [-]

Wrt the first post: this is the largest update I have had on Open Phil's chances of successfully funding something transformative. I had previously experienced several large updates in the negative direction, and based on those, my prediction was that the list of grants in this new program would be highly disappointing. Instead, all of them actually seem to be in the correct genre: things that the outside view says sometimes lead to major breakthroughs. This is in contrast to almost all funding to date, which I thought fell into the category of interventions or speed-ups in development that might improve things but would definitely not lead to breakthroughs. My current model says this sort of thing is taste-limited, in the same sense that Paul Graham attributes much of the success of YC to Jessica Livingston's taste in founders. In this case I am claiming that the given grants 'taste right' because they are at least aiming at new methods, which is the genre of improvement most heavily overrepresented in top-cited research over the last century. I would have a second significant update in the positive direction if this is part of a bigger strategy of exploring a broader range of search strategies, like piggybacking on the NIH was.

Edit: double whammy. Added benefit that this sort of thing is noteworthy, raising awareness for these sorts of strategies:

It also made the front page of HN.

Comment author: MikeJohnson 20 November 2017 08:46:51PM *  4 points [-]

This is a wonderful overview. I especially appreciated the notes about possible biases in each study.

My expectation is that the "mental health tech" field is also worth keeping an eye on, although it's often characterized by big claims and not a lot of supporting data. I'm cautiously optimistic that an app like UpLift (Spencer Greenberg et al.) might be able to improve upon existing self-administered CBT options.

There have also been a lot of promising developments in neuroscience and 'applied philosophy of mind', and if there are ways of turning these into technology, it seems plausible we could start to see some "10x results". Better ways to understand what's going on in brains will lead to better tools to fix them when they break.

The two paradigms I find most intriguing here are

  • the predictive coding / free energy paradigm (primary work by Karl Friston, Anil K. Seth, Andy Clark; for a nice summary see SSC's book review of Surfing Uncertainty and 'toward a predictive theory of depression' - also, Adam Safron is an EA who really knows his stuff here, and would be a good person to talk to about how predictive coding models could help inform mental health interventions)

  • the connectome-specific harmonic wave paradigm (primary work by Selen Atasoy; for a nice summary see this video&transcript - this has informed much of QRI's thinking about mental health)

I'd also love to survey other peoples' intuitions on what neuroscience work they think could lead to a '10x breakthrough' in mental health tech.

Comment author: RomeoStevens 20 November 2017 09:23:25PM *  4 points [-]

Two areas I think are most promising off the top of my head (held lightly):

  1. Continuing connectome work with advanced meditators. This kind of research has been ongoing at various institutes for the last decade. It would be nice to get a consistent pipeline of funding to enable less stop-start.

  2. Triaging of people into mental health interventions. By paying too much attention to mean effect size in aggregates of treatment populations, we are potentially ignoring large effect sizes in restricted treatment populations. Gathering data on the shapes of outcome distributions and doing some hypothesis exploration on which hidden features make certain people high responders to certain interventions could yield incredibly high returns (a toy illustration of the underlying statistical point is sketched below).
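A minimal sketch of the statistical point in item 2: a simulated population in which the mean treatment effect looks small, while a subgroup defined by a hidden feature responds strongly. The 20% subgroup, the effect sizes, and the "hidden feature" itself are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden feature that actually drives response (e.g. some unmeasured trait).
hidden_feature = rng.random(n) < 0.2               # 20% of people have it
true_effect = np.where(hidden_feature, 1.0, 0.0)   # effect in SD units

# Observed outcomes: individual treatment effect plus unit-variance noise.
outcomes = true_effect + rng.normal(0, 1, n)

print(f"mean effect in the whole sample: {outcomes.mean():.2f} SD")
print(f"mean effect among people with the hidden feature: "
      f"{outcomes[hidden_feature].mean():.2f} SD")
print(f"mean effect among everyone else: "
      f"{outcomes[~hidden_feature].mean():.2f} SD")
```

The aggregate mean comes out around 0.2 SD, the sort of number that looks unimpressive in a meta-analysis, even though one person in five is getting a full 1 SD benefit; finding the feature that identifies that fifth is the triage problem described above.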

Comment author: RomeoStevens 05 November 2017 05:53:16AM -1 points [-]

Geographical distance is a kind of inferential distance.

Comment author: Roxanne_Heston  (EA Profile) 02 October 2017 10:14:51AM 0 points [-]

Hm, we haven't considered this in particular, although we are considering alternative funding models. If you think we should prioritize setting something like this up, can you make the case for this over our current scheme or more general certificates of impact?

Comment author: RomeoStevens 03 October 2017 11:49:22PM 0 points [-]

I can't make a case for prioritization, as I haven't been able to find enough data points for a reasonable base rate of expectations of effects from the incentive. FQXi might have non-public data on how their program has gone that they might be willing to share with CEA. I'd probably also try reaching out to the John Templeton Foundation, though they are less likely to engage. It is likely worth a short brainstorm of people who might know more about how prizes typically work out.
