Comment author: Denkenberger 18 November 2017 06:42:38PM 0 points [-]

Wow - is there a paper to this effect? I would be surprised if it is that high for the technical fields.

Comment author: Benito 18 November 2017 07:32:27PM 0 points [-]

I haven't read Caplan's book, but I can imagine >50% of the math learned in a math course going unused in a technical career outside of research, and furthermore that the heuristics picked up in those courses don't generalise (e.g. geometry heuristics not applying to differential equations).

Comment author: Benito 01 November 2017 05:38:22AM 5 points [-]

For my own benefit I thought I'd write down examples of markets that I can see are inadequate yet inexploitable. I'm not sure all of these are actually true; some just fit the pattern.

  • I notice that most charities aren’t cost-effective, but if I decide to do better by founding a super cost-effective charity, I shouldn’t expect to be more successful than the other charities.
  • I notice that most people at university aren’t trying to learn but to get good signals for their career, yet I can’t easily do better in the job market by dropping the signalling and just learning better.
  • I notice most parenting-technique books aren't helpful (because outcomes are mostly explained by genetics), but I probably can’t make money by selling a shorter book that tells you the only parenting techniques that do matter.
  • If I notice that politicians aren’t trying to improve the country very much, I can’t get elected over them by just optimising for improving the country more (because they're optimising for being elected).
  • If most classical musicians spend a lot of money on high-status instruments and high-status teachers, and status doesn’t correlate with quality, I can’t be more successful by just picking high-quality instruments and teachers.
  • If most rocket companies are optimising for getting the most money out of government, you probably can’t win government contracts by just making a better rocket company. (?)
  • If I notice that nobody seems to be doing research on the survival of the human species, I probably can’t make it as an academic by making that my focus.
  • If I notice that most music recommendation sites give popular music glowing reviews (so that they keep getting advance copies), I can’t run a more successful review site/magazine by just being honest about the music.

Correspondingly, if these models are true, here are groups/individuals about whom it would be a mistake to infer much from the fact that they aren't doing well in these markets:

  • Just because a charity has a funding gap doesn't mean it's not very cost-effective
  • Just because someone has bad grades at university doesn't mean they are bad at learning their field
  • Just because a parenting book isn't selling well doesn't mean it isn't more useful than others
  • Just because a politician didn't get elected doesn't mean they wouldn't have made better decisions
  • Just because a rocket company doesn't get a government contract doesn't mean it isn't better at building safe and cheap rockets than other companies
  • Just because an academic is low status / outside academia doesn't mean their views aren't true
  • Just because a band isn't highly reviewed in major publications doesn't mean it isn't innovative/great

Some of these seem stronger to me than others. I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are at figuring out which bands are good.

Comment author: Pablo_Stafforini 31 October 2017 08:39:17PM *  0 points [-]

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment author: Benito 31 October 2017 09:18:06PM 0 points [-]

Yeah, that’s the sort of discussion that seems to me most relevant.

Comment author: Pablo_Stafforini 31 October 2017 02:46:42PM *  2 points [-]

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let’s consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was in originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is persuade the physicists. Short of that, he could persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists changed their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

ETA: I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Comment author: Benito 31 October 2017 07:37:44PM 3 points [-]

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

This seems correct. I just noticed you could phrase this the other way: why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn't find Greg's post particularly persuasive - this isn't a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).

Comment author: Michael_PJ 14 October 2017 09:41:40PM 4 points [-]

Thanks for this; detailed post-mortems like this are very valuable!

Some thoughts:

  1. I considered getting involved in the project, but was somewhat put off by the messaging. Somehow it came across as a "learning exercise for students" rather than an "attempt to do genuinely new research". I'm not sure exactly why that was (the grant size may have been part of it; see below), and I now regret not getting more involved.

  2. You describe the grant amount of £10,000 as "substantial". This is surprising to me, since my reaction to the grant size was that it was too small to bother with. I think this corroborates your thoughts about grant size: any size of grant would have had most of the beneficial effects that you saw, but a much larger grant would have been needed to make it seem really "serious".

  3. I think that the project goal was too ambitious. Global prioritization is much harder than more restricted prioritization, but also vaguer and more abstract. Usually when we're learning to deal with vague and abstract problems we start out by becoming very adept with simple, concrete versions to build skills and intuitions before moving up the abstraction hierarchy (easier, better feedback, more motivating, etc.). If I wanted to train up some prioritization researchers I would probably start by getting them to just do lots of small, concrete prioritization tasks.

  4. As Michael Plant says below, I think the project was in a bit of an awkward middle ground. The costs of participation (in terms of work and "top-of-mind" time) were perhaps a bit too high for either students or otherwise-busy community members (like myself), and the perceived benefits (in terms of expected quality of research produced) were perhaps too low for the professionals. (To elaborate on why engaging felt like it would be substantial work for me: in order to provide good commentary on one of your posts, I would have had to read the post; probably read some prior posts; think hard about it; possibly do some research myself; and condense all that into a thoughtful reply. That could easily take up an evening of my time, for not a huge perceived reward.) I think your suggestion of running such a project as a week-long retreat is a good one - it would get a committed block of time from people, and prevent the inefficiency of repeatedly "re-loading" the background information.

  5. Agree that quantitative modelling is great and under-utilised. I think a course which was more or less How To Measure Anything applied to EA with modern techniques and technologies would be a fantastic starter for prioritization research, and give people generally useful skills too.

  6. I would have preferred less but higher-quality output from the project. My reaction to the first few blog posts was that they were fine but not terribly interesting, so I didn't read much of the rest of the content until the models started appearing, which I did find interesting.

  7. Even if you think the project was net-negative, I hope this doesn't put you off starting new things. Exploration is very valuable, even if the median case is a failure.

Comment author: Benito 14 October 2017 11:07:28PM *  2 points [-]

I think a course which was more or less How To Measure Anything applied to EA with modern techniques and technologies would be a fantastic starter for prioritization research, and give people generally useful skills too.

Just want to strongly agree with this. Those are real figure-out-how-the-world-works skills. If anyone wants an overview, Luke Muehlhauser did an in-depth summary here.
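For anyone who hasn't seen this style of modelling, here is a minimal sketch in Python of the kind of exercise the book teaches: express each uncertain input as a 90% confidence interval, sample it many times, and read conclusions off the output distribution. The scenario and all the numbers are invented placeholders, not figures from the book or from any real charity.

```python
import math
import random

# Toy Monte Carlo estimate of "outcomes per year" for a hypothetical
# charity. Each uncertain input is given as a 90% confidence interval
# and modelled as a lognormal distribution.

N = 100_000  # number of simulation draws


def lognormal_from_ci(low, high):
    """Sample a lognormal variable whose 90% CI is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 1.645 = z for a 90% CI
    return math.exp(random.gauss(mu, sigma))


budget = 1_000_000  # hypothetical annual budget in dollars
# 90% confident the cost per outcome is between $20 and $80 (made up).
samples = sorted(budget / lognormal_from_ci(20, 80) for _ in range(N))

print(f"median: {samples[N // 2]:,.0f} outcomes/year")
print(f"90% interval: {samples[N // 20]:,.0f} to {samples[N - N // 20]:,.0f}")
```

Even a ten-line model like this forces you to state your uncertainty explicitly, which is most of the skill being taught.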

Even if you think the project was net-negative, I hope this doesn't put you off starting new things. Exploration is very valuable, even if the median case is a failure.

Further agreement. Seeing a write-up of a failed project is one of the few signs to me that EA is actually trying. I have a vague recollection of Charity Science publishing a failed-project report too.

Comment author: MichaelPlant 13 October 2017 10:57:38AM *  9 points [-]

Hello Tom, thanks very much for this write up. Three comments:

I very much admire your ability to self-criticise, but I think you're being overly harsh on yourself. It didn't turn out as well as you hoped, but you couldn't have known that in advance, which was the point. I think this is a good example of what is sometimes called 'hits-based charity': EAs trying new things with a high expected value but a low probability of success. I also hesitate to call this a failure because, as you noted, quite a few lessons were learnt. I think your (only?) substantial mistake was in having too high expectations about what a part-time student group could achieve. Perhaps you took "EAs", who are typically smart, conscientious and driven, as your reference group, rather than "student club/society", which no one really expects to be very productive or world-changing.

On reflection, I wonder if OxPrio fell into a sort of research no-man's land. It was too detailed for students and average EAs to engage with, but maybe not in-depth enough to attract critical commentary and engagement from full-time researchers, such as those at CEA or GiveWell, whose research you were, to some extent, replicating. I'm not sure who you thought the target audience of your research was.

I think a contributing factor to not having much local, Oxford University engagement is that you'd selected a team. Presumably the people who would be most interested in OxPrio's research applied. I imagine many of the people who applied but were rejected from the team then decided, as a standard psychological reflex, that they didn't want to be involved further (disclaimer: I applied and was rejected, but ended up being really curious about what OxPrio was doing anyway). Hence the selection process alienated much of your intended audience. I don't have a suggestion for what would have been better; I just think this is worth factoring in.

Comment author: Benito 13 October 2017 02:40:11PM *  1 point [-]

I will echo the conclusion of this, in that OxPrio was likely a counterfactually net-positive way to spend your time. Actually running a real team project with a deadline and people depending on you, learning basic management, and realising the difference between how you expect a group of people to behave and how they actually behave: these are rare life lessons that many people don't learn, or at least not until much later in life.

Comment author: itaibn 08 September 2017 12:38:24AM 0 points [-]

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment author: Benito 08 September 2017 09:24:04PM 0 points [-]

"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

Comment author: Benito 05 September 2017 09:19:44PM 6 points [-]

I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, and Feynman. Also, and this surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book that a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.

Comment author: Kerry_Vaughan 07 July 2017 05:45:21AM 19 points [-]

This was the most illuminating piece on MIRI's work and on AI Safety in general that I've read in some time. Thank you for publishing it.

Comment author: Benito 07 July 2017 05:57:42AM *  8 points [-]

Agreed! It was nice to see the clear output of someone who had put a lot of time and effort into a good-faith understanding of the situation.

I was really happy with the layout of the four key factors; it will help me bring more clarity to further discussions.

Comment author: kierangreig 28 June 2017 03:55:57PM *  8 points [-]

(1) To what degree did your beliefs about the consciousness of insects (if insects are too broad a category, please just focus on the common fruit fly) change from completing this report, and what were the main reasons for those changes? I would be particularly interested in an answer that covers the following three points: (i) the rough probability that you previously assigned to them being conscious, (ii) the rough probability that you now assign to them being conscious, and (iii) the main reasons for the change in that probability.

(2) Do you assign a 0% probability to electrons being conscious?

(3) In section 5.1 you write

I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from which I’ve gotten extensive feedback is David Chalmers.)

David Chalmers seems like an interesting choice as the one long-time “consciousness expert” to receive extensive feedback from. Why was he the only one? And of the other consciousness experts you would like extensive feedback from, do you think most of them would disagree with some part of the report in a similar way, and if so, what would that disagreement or those disagreements be?

(4) A while ago, Carl Shulman put out this document detailing research advice. Could you please do the same, or, if you already have a document like this, point me to it? I would probably find it useful, and I would guess some others would too.

Comment author: Benito 28 June 2017 06:03:50PM 2 points [-]

(Meta: It might be more helpful to submit individual questions as separate comments, so that people can upvote them separately and people's favourite questions (and associated answers) can rise to the top.)
