Comment author: Paul_Crowley 25 February 2017 09:05:30PM 2 points [-]

Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.

Comment author: AlyssaVance 04 December 2016 10:57:57PM 2 points [-]

"In general, without a counterfactual in the background all criticism is meaningless"

This seems like a kind of crazy assertion to me. Eg., in 1945, as part of the war against Japan, the US firebombed dozens of Japanese cities, killing hundreds of thousands of civilians. (The bombs were intentionally designed to set cities on fire.) Not being a general or historian, I don't have an exact plan in mind for an alternative way for the past US to have spent its military resources. Maybe, if you researched all the options in enough detail, there really was no better alternative. But it seems entirely reasonable to say that the firebombing was bad, and to argue that (if you were around back then) people should maybe think about not doing that. (The firebombing is obviously not comparable to the pledge, I'm just arguing the general principle here.)

"This is only half-true. I pledged 20%."

The statement was that the pledge recommended 10%, which is true. Of course other people can choose to do other things, but that seems irrelevant.

"Citation needed?"

The exact numbers aren't important here, but the US federal budget is $3.8 trillion, and the US also has a great deal of influence over both private money and foreign money (through regulations, treaties, precedent, diplomatic pressure, etc.). There are three branches of government, of which Congress is one; Congress has two houses, and there are then 435 representatives in the lower house. Much of the money flow was committed a long time ago (eg. Social Security), and would be very hard to change; on the other hand, a law you pass may keep operating and directing money decades into the future. Averaged over everything, I think you get ~$1 billion a year of total influence, order-of-magnitude; 0.1% of that is $1 million, or 57x the $17,400 personal donation. This is fairly conservative, as it basically assumes that all you're doing is appropriating federal dollars to GiveDirectly or something closely equivalent; there are probably lots of cleverer options.
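
Making the back-of-the-envelope arithmetic above explicit (every figure here is the comment's own order-of-magnitude assumption, not researched data):

```python
# Rough sanity check of the influence estimate above.
influence_per_year = 1e9          # assumed ~$1B/year of total influence
fraction_counterfactual = 0.001   # 0.1% counterfactual impact
personal_donation = 17_400        # the personal donation being compared

influence = influence_per_year * fraction_counterfactual
print(influence)                       # $1 million/year
print(influence / personal_donation)  # ~57.5, the "57x" figure
```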

"But if your time really is so incredibly valuable, then you shouldn't spend time doing this yourself, you outsource it to people you trust"

The orders of magnitude here aren't even comparable. This might reduce the net cost to your effectiveness from 5% to 2%, or something like that; it's not going to reduce it to 0.0001%, or whatever the number would have to be for the math to work out.

"However, in principle you should still give your money, just not your time."

In practice, there is always some trade-off between money and time (eg. here discusses this, as do lots of other sites). The rate varies depending on who you are, what you're doing, the type of time you're trading off against, etc. But again, it's not going to vary by the orders of magnitude you seem to implicitly assume.

"Indeed, as you point out towards the end of your piece, this is basically Givewell's initial model."

The initial GiveWell audience was mostly trading off against personal leisure time; that obviously isn't the case here.

"the marginal effort is roughly 0"

It seems extremely implausible that someone making a middle-class salary, or someone making an upper-middle-class salary but under very high time pressure and with high expenses, could give away 10% of their income for life and literally never think about it again.

"If we're not including students, what's your source for thinking effective altruists are 'low income'? Low relative to what?"

Relative to their overall expected career paths. In upper-middle-class and upper-class career tracks (finance, law, business management, entrepreneurship, etc.), income is very back-weighted, with the large majority of expected income coming during the later years of the career.

"You used two examples from (a) GWWC's page itself and (b) CEA updates, GWWC's parent organisation, and used this to conclude the other metrics don't exist?"

I can't prove a negative. If they do exist, where are they? If you link to some, I'll happily add them to the post, as I did for 80K's metrics.

"What metrics do you think 80k/REG/EAF/FHI/CSER/MIRI/CFAR/Givewell/any-other-EA-org are using?"

The GWWC pledge count is used as a metric for EA as a whole, rather than for any specific org like MIRI, CFAR, etc. (Also, AFAIK, many of the orgs mentioned don't really even have internal metrics, except things like "total annual budget" that aren't really measures of efficacy.)

"And we know that CEA is aware of the possible issues with being too focused on this to the exclusion of all else because they said exactly that here* on the forum."

That's cool, but as far as I know, these metrics don't yet exist. If they do exist, great, I'll link them here.

"I don't see how that difference is going to be on a par with your Nevada/Alaska comparison"

The important difference isn't the donation amounts (at least for that example). The important differences are a) this is a public commitment, while most GiveWell-influenced donations are private; b) the commitment is made all at once, rather than year-by-year; c) the commitment is the same income fraction for every year, rather than being adjustable on-the-fly; d) the standard deviation of income for pledgers is almost certainly much higher than for GiveWell's initial audience; e) the standard deviation of human capital is higher; f) the standard deviation of amount-of-free-time is higher; g) pledgers now have very different, and much higher-variance, ideas about "the most good" than a typical GiveWell donor in 2009 (though this is somewhat of an "accident of sociology" rather than intrinsic to the pledge itself).

"I didn't really follow the argument being made here; how does the second point follow from the first?"

There's a selection effect where pledge-takers are much less likely to be the type of people who'd be turned off by donating to a "weird" charity, taking a "weird" career, etc., since people like that would probably not pledge in the first place.

Comment author: Paul_Crowley 05 December 2016 12:09:50AM 5 points [-]

You have a philosopher's instinct to reach for the most extreme example, but in general I recommend against that.

There's a pretty simple counterfactual: don't take or promote the pledge.

Comment author: AnnaSalamon 01 December 2016 08:38:56AM *  15 points [-]

I suspect it’s worth forming an explicit model of how much work “should” be understandable by what kinds of parties at what stage in scientific research.

To summarize my own take:

It seems to me that research moves down a pathway from (1) "totally inarticulate glimmer in the mind of a single researcher" to (2) "half-verbal intuition one can share with a few officemates, or others with very similar prejudices" to (3) "thingy that many in a field bother to read, and most find somewhat interesting, but that there's still no agreement about the value of" to (4) "clear, explicitly statable work whose value is universally recognized as valuable within its field". (At each stage, a good chunk of work falls away as a mirage.)

In "The Structure of Scientific Revolutions", Thomas Kuhn argues that fields begin in a "preparadigm" state in which nobody's work gets past (3). (He gives a bunch of historical examples that seem to meet this pattern.)

Kuhn’s claim seems right to me, and AI Safety work seems to me to be in a "preparadigm" state in that there is no work past stage (3) now. (Paul's work is perhaps closest, but there are still important unknowns and disagreements about foundations, whether it'll work out, etc.)

It seems to me one needs epistemic humility more in a preparadigm state, because, in such states, the correct perspective is in an important sense just not discovered yet. One has guesses, but the guesses cannot yet be established as common knowledge.

It also seems to me that the work of getting from (3) to (4) (or from (1) or (2) to (3), for that matter) is hard, that moving along this spectrum requires technical research (it basically is a core research activity), and that one shouldn't be surprised if it sometimes takes years -- even in cases where the research is good. (This seems to me to also be true in e.g. math departments, but to be extra hard in preparadigm fields.)

(Disclaimer: I'm on the MIRI board, and I worked at MIRI from 2008-2012, but I'm speaking only for myself here.)

Comment author: Paul_Crowley 02 December 2016 01:29:47AM 3 points [-]

I went to a MIRI workshop on decision theory last year. I came away with an understanding of a lot of points of how MIRI approaches these things that I'd have a very hard time writing up. In particular, at the end of the workshop I promised to write up the "Pi-maximising agent" idea and how it plays into MIRI's thinking. I can describe this at a party fairly easily, but I get completely lost trying to turn it into a writeup. I don't remember other things quite as well (eg "playing chicken with the Universe") but they have the same feel. An awful lot of what MIRI knows seems to me folklore like this.

Comment author: shlevy 24 October 2016 02:39:40PM 21 points [-]

Note: I am socially peripheral to EA-the-community and philosophically distant from EA-the-intellectual-movement; salt according to taste.

While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I've seen around Gleb are charitable and systematic in excess of reasonable caution.

My first introduction to Gleb was Jeff's August post, read before there were any comments up, and it seemed very clear that he was acting in bad faith and trying to use community norms of particular communication styles, owning up to mistakes, openness to feedback, etc. to disarm those engaging honestly and enable the con to go on longer. I don't think I'm an especially untrusting person (quite the opposite, really), but even if that's the case nearly every subsequent revealed detail and interaction confirmed this. Gleb responds to criticism he can't successfully evade by addressing it in only the most literal and superficial manner, and continues on as before. It is to the point that if I were Gleb, and had somehow honestly stumbled this many times and fell into this pattern over and over, I would feel I had to withdraw on the grounds that no one external to my own thought processes could possibly reasonably take me seriously and that I clearly had a lot of self-improvement to do before engaging in a community like this in the future.

The responses to this behavior that I've seen are overwhelmingly of the form of taking Gleb seriously, giving him the benefit of the doubt where none should exist, providing feedback in good faith, and responding positively to the superficial signs Gleb gives of understanding. This is true even for people who I know have engaged with him before. I'm not completely confident of this, but the pattern looks like people are applying the standards of charity and forgiveness that would be appropriate for any one of these incidences in isolation, not taking into account that the overall pattern of behavior makes such charitable interpretations increasingly implausible. On top of that, some seem to have formed clear final opinions that Gleb is not acting in good faith, yet still use very cautious language and are hesitant to take a single step beyond what they can incontrovertibly demonstrate to third parties.

A few examples from this post, not trying to be comprehensive:

  • Using the word "concerns" in the title and introductory matter
  • Noting that Gleb doesn't "appear" to have altered his practices around name-dropping
  • Saying "Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on common-sense interpretation are acceptable providing they are arguably 'true' on some tendentious reading" without bringing up the possibility of him knowing exactly what he's doing and just lying
  • Calling Gleb's self-proclaimed bestselling-author status only "potentially" misleading

Moreover, the fully comprehensive nature of the post and the painstaking lengths it goes to separate out definitely valid issues from potentially invalid ones seems to be part of the same pattern. No one, not even Gleb, is claiming that these instances didn't happen or that he is being set up, yet this post seems to be taking on a standard appropriate for an adversarial court of law.

And this is a problem, because in addition to wasting people's time it causes people less aware of these issues to take Gleb more seriously, encourages him to continue behaving as he has been, and I suspect in some cases inclines even the more knowledgeable people involved to trust Gleb too much in the future, despite whatever private opinions they may have of his reliability. At some point there needs to be a way for people to say "no, this is enough, we are done with you" in the face of bad behavior; in this case if that is happening at all it is being communicated behind-the-scenes or by people silently failing to engage. That makes it much harder for the community as a whole to respond appropriately.

Comment author: Paul_Crowley 24 October 2016 03:24:04PM 18 points [-]

I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.

Comment author: oagr 13 August 2016 03:11:37AM 1 point [-]

Not having the group photo.

The group photo always seems to take 20 minutes or so. It's kind of fun, but 20 minutes times the number of participants (1k?) is ~300 hours, or around $10k of value. Is it worth it? I'm skeptical, but could see it.
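
Spelling out the estimate (the participant count is the comment's "(1k?)" guess, and the dollars-per-hour rate is an assumption implied by the ~$10k total, not a stated figure):

```python
# Time cost of the group photo, per the rough numbers above.
minutes_per_person = 20
participants = 1000              # the "(1k?)" guess
value_per_hour = 30              # assumed, to recover the ~$10k total

hours = minutes_per_person * participants / 60
print(hours)                     # ~333 hours, i.e. the "~300 hours"
print(hours * value_per_hour)    # ~$10k of value
```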

Comment author: Paul_Crowley 13 August 2016 03:54:42AM 2 points [-]

I strongly suspect that the group photo is of very high value in getting people to go, making them feel good about having gone, and making others feel good about the conference. However, it sounds like trying to optimize to shave a few minutes off would be pretty high value.

Comment author: Larks 25 January 2016 03:46:03AM 1 point [-]

David Friedman addressing an instance of this issue:

"One puzzling feature of rights as we observe them is the degree to which the same conclusions seem to follow from very different assumptions. Thus roughly similar structures of rights can be and are deduced by libertarian philosophers trying to show what set of natural rights is just and by economists trying to show what set of legal rules would be efficient. And the structures of rights that they deduce seem similar to those observed in human behavior and embodied in the common law. In Part III of this essay I will try to suggest at least partial explanations for this triple coincidence—the apparent similarity between what is, what is just, and what is efficient."

Comment author: Paul_Crowley 25 January 2016 11:02:36AM 2 points [-]

What is remarkable about this, of course, is the recognition of the need to address it.

Comment author: kbog  (EA Profile) 16 December 2015 06:00:56AM 1 point [-]

Yes! Thank you for this. Pascal's Muggings have to posit paranormal/supernatural mechanisms to work. But x-risk isn't like that. Big difference which people seem to overlook. And Pascal's Muggings involve many orders of magnitude smaller chances than even the most pessimistic x-risk outlooks.

Comment author: Paul_Crowley 16 December 2015 08:42:02AM 0 points [-]

I agree with your second point but not your first. Also it's possible you mean "optimistic" in your second point: if x-risks themselves are very small, that's one way for the change in probability as a result of our actions to be very small.

Comment author: Paul_Crowley 25 November 2015 01:56:57PM 3 points [-]

Where the survey says 2014, do you mean 2015?

Comment author: kbog  (EA Profile) 20 September 2015 06:47:01PM *  3 points [-]

The difference in size between a human brain and a rat brain is significant: an average adult human brain is 1300-1400g, while the average rat brain is 2g. There's no reason to peg the latter's capability to generate vivid mental states as within one, or in my opinion even two, orders of magnitude of the former's.
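
As a quick check of the orders-of-magnitude claim, using the midpoint of the quoted human range (the gram figures are the comment's own):

```python
import math

human_brain_g = 1350   # midpoint of the 1300-1400g range above
rat_brain_g = 2

ratio = human_brain_g / rat_brain_g
print(ratio)                  # 675x by mass
print(math.log10(ratio))     # ~2.8, i.e. between two and three orders of magnitude
```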

"The brain structures that make humans happy look similar to the brain structures that make rats happy."

Yes, but one is much larger and more powerful than the other.

"Rats behave in similar ways to humans in response to pleasurable or painful stimuli."

So do all sorts of non-conscious entities.

"Most of the parts of the human brain that other animals don't possess have to do with high-level cognitive processing, which doesn't seem to have much to do with happiness or suffering."

But the difference in size and capacity is altogether too large to be handwaved in this way. Besides, many components of human happiness do depend on higher level cognitive processing. What constitutes brute pain is simple, but what makes someone truly satisfied and grateful for their life is not.

Comment author: Paul_Crowley 23 September 2015 11:11:07AM 1 point [-]

Yes, I'd treat the ratio of brain masses as a lower bound on the ratio of moral patient-ness.

Comment author: Paul_Crowley 15 August 2015 07:12:58AM 0 points [-]

Tax complicates this. If I'm in a higher tax band than you, I can make a donation to charity more cheaply than you can, so you will "receive" more than I "give", and vice versa.
