Comment author: Robert_Wiblin 16 December 2015 05:46:54PM 3 points

Look, no doubt the argument has been made by people in the past, including Bostrom, who wrote it up for consideration as a counterargument. I do think the 'astronomical waste' argument should be considered, and it's far from obvious that 'this is a Pascal's Mugging' is enough to overcome its strength.

But it's also not the main reason, the only reason, or the best reason on which many people who work on these problems could ground their choice to do so.

So if you dismiss this argument, then before you dismiss the work itself, move on to look at what you think is the strongest argument for it, not the weakest.

Comment author: pappubahry 17 December 2015 01:14:17AM 0 points

If I were debating you on the topic, it would be wrong to say that you think it's a Pascal's mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from "tiny probability of gigantic benefit" in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.

(This is in contrast to MIRI, which as SIAI was utterly woeful and which in its current incarnation still didn't look like a research institute worthy of the name when I last checked in during the great Tumblr debate of 2014; maybe they're better now, I don't know.)

In that context, you'll have to keep politely telling people that you think the case is stronger than the position your most prominent academic supporter argues from, because the "Pascal's mugging" thing isn't going to disappear from the public debate.

Comment author: Robert_Wiblin 16 December 2015 01:55:41PM 0 points

I asked Bostrom about this and he said he never even made this argument in this way to the journalist. Given my experience of the media misrepresenting everything you say and wanting to put sexy ideas into their pieces, I believe him.

Comment author: pappubahry 16 December 2015 02:37:24PM 5 points

The New Yorker writer got it straight out of this paper of Bostrom's (paragraph starting "Even if we use the most conservative of these estimates"). I've seen a couple of people report that Bostrom made a similar argument at EA Global.

Comment author: pappubahry 15 December 2015 02:32:37PM 3 points

I get what you're saying, but, e.g., in the recent profile of Nick Bostrom in the New Yorker:

No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of reducing an existential threat by a billionth of a billionth of one per cent would be worth a hundred billion times the value of a billion present-day lives. Put more simply: he believes that his work could dwarf the moral importance of anything else.

While the most prominent advocate in the respectable-academic part of that side of the debate is making Pascal-like arguments, there's going to be some pushback about Pascal's mugging.
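For reference, here is a rough reconstruction of the arithmetic behind that quote (the notation and the threshold are my own gloss on it, not figures lifted from Bostrom's paper): with a credence of p = 1% in the utopian scenario, a risk reduction of Δ = a billionth of a billionth of one per cent = 10^-20, and V potential future lives at stake,

    E[\text{value}] = p \cdot \Delta \cdot V = 10^{-2} \cdot 10^{-20} \cdot V ,

so any V of at least 10^{42} already pushes the expected value past 10^{11} \cdot 10^{9} = 10^{20} lives, i.e. "a hundred billion times the value of a billion present-day lives" -- and the estimates Bostrom cites for the digital-minds scenario are far larger than that.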

Comment author: pappubahry 21 November 2015 04:16:20AM 1 point

I confess I'm a bit surprised no one else has linked this yet.

Judging by GiveWell's Twitter and Facebook feeds, the post is mis-dated -- it only went live about 8 hours ago (at time of writing my comment), rather than 2 or 3 days ago.

Comment author: tomstocker 14 May 2015 07:51:07AM 1 point

?

Comment author: pappubahry 14 May 2015 11:43:41AM 3 points

I think this is referring to a common probability question, e.g., example 3 here.


EA Survey bar chart plotter

I've made a bar chart plotter for the EA survey results (including only data from respondents who said they could be called an EA), which lets you choose the categories to plot. In addition to the survey being unrepresentative of the broader EA community in unknown ways (see, e.g., the discussion ...

Comment author: Peter_Hurford 23 March 2015 07:14:14PM 0 points

Woah, this is an impressive data viz accomplishment! You should make it a top-level post -- it's cooler than a comment. :)

-

Also, ...

I [...] converted all currencies to USD, independently of the survey team

How did you do that so quickly? We had to pay $60 to get it done manually via virtual assistants.

Comment author: pappubahry 24 March 2015 12:40:28AM * 1 point

Thanks Peter! I'll make the top-level post later today.

How did you do that so quickly?

(I might have given the impression that I did this all during a weekend. That isn't quite right -- I spent 2-3 evenings, about 8 hours in total, going from the raw CSV files to a nice, compact .js function. Then I wrote the plotter on the weekend.)

I did this bit in Excel. The money amounts were in column A, and I inserted three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the minimum of any range given, D for the maximum. In column C, I started with =IF(ISNUMBER(A2), A2, "") and dragged that formula down the column. Then I went through line by line, reading off any text entries and turning them into currency/min/max (if a single value was reported, I entered it as the min and left the max blank): currency, tab, number, enter, currency, tab, number, tab, number, enter, currency, tab...

It's not a fun way to spend an evening (which is why I didn't do the lifetime donations as well), but it doesn't actually take that long.

Then: a new column E with =AVERAGE(C2:D2) dragged down the column. Finally, I typed the average 2013 currency-conversion rates into a new sheet and did a lookup (most users would use VLOOKUP, I think; I used MATCH and OFFSET) to get my final USD numbers in column F.
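For anyone who would rather script this than click through Excel, a rough base-R sketch of the same cleaning steps is below. The column names and the conversion rates are placeholders for illustration -- they aren't the survey's actual headers or the exact 2013 averages I used.

    # Rough sketch only: column names (currency, min_amt, max_amt) and the
    # conversion rates are placeholders, not the real survey headers or rates.
    rates <- c(USD = 1, GBP = 1.56, EUR = 1.33, AUD = 0.97)    # illustrative 2013-ish averages

    d <- read.csv("survey.csv", stringsAsFactors = FALSE)
    d$currency[is.na(d$currency) | d$currency == ""] <- "USD"  # assume USD unless specified
    single <- is.na(d$max_amt)                                 # only one value reported...
    d$max_amt[single] <- d$min_amt[single]                     # ...treat it as both min and max
    d$avg_amt <- (d$min_amt + d$max_amt) / 2                   # midpoint of the reported range
    d$usd <- d$avg_amt * rates[d$currency]                     # convert via named-vector lookup

The last line is the VLOOKUP-style step: indexing the named rates vector by the currency column converts everything to USD in one go.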

Also, do you have the GitHub code for your plotter?

As a fierce partisan of the "finalreally2" school of source control, I've yet to learn how to GitHub. You can view the JavaScript source easily enough though, and save it locally. (I suggest deleting the Google Analytics "i,s,o,g,r,a,m" script if you do this, or your browser might go looking for Google in your file system for a few seconds before plotting the graphs.) The two scripts not in the HTML file itself are d3.min.js and easurvey.data.js. (EDIT: can't be bothered fixing the markdown underscores here.)

A zip file with my ready-to-run CSV file and the R script to turn it into a JavaScript function is here.

Comment author: pappubahry 23 March 2015 08:19:23AM 4 points

I've made a bar chart plotter thing with the survey data: link.

Comment author: pappubahry 17 March 2015 10:41:05AM 3 points

The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from

Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]

until (at least)

Over 2013, which charities did you donate to? [Against Malaria Foundation].

Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.

Comment author: pappubahry 17 March 2015 10:16:35AM 5 points

Thanks for this, and thanks for putting the full data on GitHub. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).

I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranked cause, which seemed surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which groups together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)
