Comment author: RobBensinger 09 November 2017 02:21:17AM *  0 points [-]

I'm not an expert in this area and haven't seen that study, but I believe Eliezer generally defers to Bryan Caplan's analysis on this topic. Caplan's view, discussed in The Case Against Education (which is scheduled to come out in two months), is that something like 80% of the time students spend in school is signaling, and something like 80% of the financial reward students enjoy from school is due to signaling. So the claim isn't that school does nothing to build human capital, just that a very large chunk of schooling is destroying value.

Comment author: Denkenberger 18 November 2017 06:42:38PM 0 points [-]

Wow - is there a paper to this effect? I would be surprised if it is that high for the technical fields.

Comment author: Denkenberger 18 November 2017 05:23:51PM 2 points [-]

A lot of us are part of the global '99 percent', so to speak.

Did you mean to say 1 percent?

Comment author: DCM 08 November 2017 06:29:04PM *  0 points [-]

Could you speak a little to the adversarial nature of nuclear war adaptation? (Apologies if it’s already discussed and I missed it, I’m quite bad at tracking the maths often used in the EA community.)

As far as I know, a full-scale nuclear exchange is still understood strategically as mutually assured destruction. If agricultural adaptation becomes a confounding factor for MAD, then would this not just increase pressure to expand stockpile sizes/yields, or encourage the use of deadlier alternatives (e.g. cobalt-60 weapons, or the effective equivalent thereof), until MAD is achieved again? It strikes me as a situation somewhat analogous to ICBM shields - in a vacuum it's a countermeasure to the status quo, but there's an obvious counter-countermeasure available.

Comment author: Denkenberger 18 November 2017 02:48:10AM 0 points [-]

I didn't mention this issue in this point, but here is an excerpt from a paper:

"Moral hazard in this context refers to the possibility that awareness of a food backup plan will result in less effort to prevent these catastrophes. Nuclear winter is the catastrophe over which humanity has the most technical control and poses the most serious threat. Mikhail Gorbachev explicitly stated that a motivating factor for reducing the nuclear arsenal of the USSR was the studies predicting nuclear winter and therefore destruction outside of the target countries [48]. However, despite the knowledge of the possibility of nuclear winter, the nuclear arsenals remain large enough to potentially cause nuclear winter. Similarly, though there is a clear and present threat of anthropogenic abrupt climate change, little has been done to effectively prevent global climate change [49]. Furthermore, the backup plan presented here could reduce the damages associated catastrophes over which humanity currently has no or very little control (e.g. supervolcanic eruptions). The only cases for which moral hazard appears to be important are the super organisms. Therefore, despite the relatively small moral hazard dilemma, we believe humanity would be much better off with a viable back up plan."

The quote above addresses total assured destruction (TAD) more than deterrence; your question was about mutual assured destruction, which is basically deterrence. In this case, I would argue that even with food backup plans, being able to kill half of your enemy's population is sufficient deterrence (indeed, just having 100 nukes and being able to kill as many people as died in WWII would be enough deterrence in my opinion, with far less nuclear winter risk). ICBM shields have the potential to eliminate deterrence and could make a first strike attractive, so they are more problematic.

Comment author: kbog  (EA Profile) 17 November 2017 04:27:35AM *  0 points [-]

Academia and the media do have a high level of ideological conformity

As far as I can tell this is pretty much false. I've seen lots of ideological diversity in both. Do you have any evidence for your position?

I am not the first person to make this kind of criticism

No, but the people who are actually informed and make this criticism don't blindly wave it as a bludgeon against the mass of evidence that doesn't suit their opinions.

Feminism has greatly influenced the present-day understanding of sexual assault and sexual harassment

That would make sense, since feminists are people whose job it is to understand these sorts of things.

If you look at the careers of central feminist legal scholars and researchers, like Catharine MacKinnon and Mary Koss, you will find that they have been incredibly influential

Yes, it seems like they are regarded as experts by large, competent, nonpartisan institutions.

While EAs are working hard to save lives and struggling for mainstream acceptance

EA has very good mainstream acceptance given how new it is.

How come? What has Koss accomplished?

She has done research and advocacy which was regarded as excellent by large, competent organizations.

The work of Koss, MacKinnon, and all the other feminist figures, influences policy from the university, to the workplace, to high schools, to global bodies like the UN and the Hague.

Yes. That's because they thought it was very good. I'm still not sure what your argument is.

Everything you think you know about sexual assault, sexual violence, and sexual harassment actually comes from the tireless influence of feminist legal activism

What? Where did that come from? Mary Koss is an academic psychiatrist. Do you not know the difference between psychiatric research and legal activism?

Regardless of whether you think this perspective is correct or not, it's important to understand the history of where your foundational moral concepts come from

"Our knowledge of gender violence come from a world-renowned psychiatrist." I'm kind of sad that this is the best argument you can give.

Comment author: Denkenberger 17 November 2017 06:04:59PM 3 points [-]

This shows that psychology professors in the US are ~10:1 liberal to conservative, almost as extreme as EA. So I think there are data to show that there is little ideological diversity in academia, especially the humanities, social sciences, and arts.

Comment author: Denkenberger 14 November 2017 10:44:21PM 2 points [-]

Thanks! I would also add that I find it useful to save reading that is not too difficult for times when I would otherwise be less productive, like when I am tired.

Comment author: Denkenberger 08 November 2017 10:57:45PM 2 points [-]

As for the value of college for non-doctors, what about the study of randomly chosen GI Bill recipients, which found that college did have significant causal benefits (i.e. it was not just a correlation from colleges selecting better-qualified applicants)?

Comment author: Benito 01 November 2017 05:38:22AM 5 points [-]

For my own benefit I thought I'd write down examples of markets that I can see are inadequate yet inexploitable. I'm not sure all of these are actually true; some just fit the pattern.

  • I notice that most charities aren’t cost effective, but if I decide to do better by making a super cost-effective charity I shouldn’t expect to be more successful than the other charities.
  • I notice that most people at university aren't trying to learn but to get good signals for their careers; I can't easily do better in the job market by dropping the signaling and just learning better.
  • I notice most parenting technique books aren't helpful (because genetics), but I probably can’t make money by selling a shorter book that tells you the only parenting techniques that do matter.
  • If I notice that politicians aren’t trying to improve the country very much, I can’t get elected over them by just optimising for improving the country more (because they're optimising for being elected).
  • If most classical musicians spend a lot of money on high-status instruments and time with high-status teachers whose status doesn't correlate with quality, you can't be more successful by just picking high-quality instruments and teachers.
  • If most rocket companies are optimising for getting the most money out of government, you probably can’t win government contracts by just making a better rocket company. (?)
  • If I notice that nobody seems to be doing research on the survival of the human species, I probably can't make it as an academic by making that my focus.
  • If I notice that most music recommendation sites are highly reviewing popular music (so that they get advance copies) I can’t have a more successful review site/magazine by just being honest about the music.

Correspondingly, if these models are true, here are groups/individuals who it would be a mistake to infer strong information about if they're not doing well in these markets:

  • Just because a charity has a funding gap doesn't mean it's not very cost-effective
  • Just because someone has bad grades at university doesn't mean they are bad at learning their field
  • Just because a parenting book isn't selling well doesn't mean it isn't more useful than others
  • Just because a politician didn't get elected doesn't mean they wouldn't have made better decisions
  • Just because a rocket company doesn't get a government contract doesn't mean it isn't better at building safe and cheap rockets than other companies
  • Just because an academic is low status / outside academia doesn't mean their views aren't true
  • Just because a band isn't highly reviewed in major publications doesn't mean it isn't innovative/great

Some of these seem stronger to me than others. I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are at figuring out which bands are good.

Comment author: Denkenberger 03 November 2017 05:39:46PM 4 points [-]

As an academic in existential risk, I thought I would comment. In my experience, it is challenging to get interdisciplinary papers published, which is why I think it would be great if someone started an interdisciplinary existential/global catastrophic risk journal. But I would say that mentions of "global catastrophic risk" and "existential risk" appeared to be growing about 40% per year when I tried to analyze Google Scholar. This growth is good for citations, and my paper citations have not been bad.
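For a sense of scale, ~40% annual growth (taking the rough estimate above at face value, not a new measurement) implies the mention count doubles roughly every two years, since the doubling time is log 2 / log 1.4:

```python
import math

annual_growth = 0.40  # ~40%/year, the rough estimate from Google Scholar
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_time, 2))  # → 2.06
```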

Comment author: Denkenberger 03 November 2017 05:34:47PM 1 point [-]

For the $10/life, were you referring to this? A solution for low-prestige, low-citation, but important research is a doubly altruistic researcher who is willing to work for no money and few citations. By the way, I haven't been able to find data, but I think most research is unfunded. This is certainly true in the humanities, but even in STEM, my experience is that most grad students outside the top 50 U.S. universities are unfunded. And professors at colleges with no grad students are often expected to produce research, typically unfunded.

Comment author: Denkenberger 03 November 2017 05:20:50PM 2 points [-]

What do you think of neglectedness popping up in Owen's model when he was not trying to produce it? And his general logarithmic returns? I do agree with you that even if the cause area is not neglected, there could be cost effective interventions, as I argue here. But I would still say that within interventions, neglectedness is an important indicator of cost effectiveness.
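One way to see why neglectedness falls out of a logarithmic-returns model (a minimal sketch with made-up funding levels, not real figures): if returns to resources are u(x) = a·log(x), the marginal value of the next dollar is u'(x) = a/x, which is inversely proportional to what has already been invested.

```python
def marginal_value(total_invested: float, scale: float = 1.0) -> float:
    """Marginal return of one more dollar under u(x) = scale * log(x),
    i.e. u'(x) = scale / x."""
    return scale / total_invested

# Hypothetical funding levels for two cause areas with the same returns curve:
crowded = marginal_value(1e9)    # $1B already invested
neglected = marginal_value(1e6)  # $1M already invested

# The neglected area's marginal dollar buys 1000x as much:
print(neglected / crowded)  # → 1000.0
```

So under this (strong) assumption of identical log-shaped returns curves, relative neglectedness alone determines relative marginal cost-effectiveness.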

Comment author: Michael_S 30 October 2017 02:02:22PM 4 points [-]

Really exciting work! This seems like an intervention that could potentially be funded with public resources more easily than AI safety research could, which opens up another avenue to funding.

I see how this could be very useful in the event of a nuclear war, but I do have some skepticism about how useful these alternative foods would be for a less severe shortage. With a 10% reduction in agricultural productivity, why do you think alternative foods that don't need sunlight could be cheaper than simply expanding how much usable land we devote to agriculture, or using land to grow products that are cheaper per calorie?

Comment author: Denkenberger 31 October 2017 03:09:26PM 2 points [-]

As for the funding part of your comment, it is true that the agricultural risks are more mainstream than AI. We have been pursuing public resources (e.g. grants). However, I think EAs with their willingness to change their minds and openness to expected value calculations are ideal candidates to recognize the value of this early on and help get it off the ground.
