Comment author: Dunja 28 February 2018 11:23:45AM 1 point [-]

Oh, I haven't seen that publication on their website. If it was a peer-reviewed publication, that would indeed be something (and the kind of thing I've been looking for). Could you please link to the publication?

Comment author: Gregory_Lewis 28 February 2018 06:06:00AM *  4 points [-]

Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a two-year out-of-date rough draft on bibliometrics re: MIRI, which likely won't get updated, due to being superseded by Larks' excellent work and other constraints on my time. That said:

My impression of computer science academia was that (unlike in most other fields) conference presentations are significantly more important than journal publications. Further, when one looks at work on MIRI's page from 2016-2018, I see two papers at Uncertainty in Artificial Intelligence (UAI), which this site suggests is a 'top-tier' conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although 'many people at MIRI' are acknowledged.)

Comment author: DominikPeters 28 February 2018 10:26:45AM 3 points [-]

Also, parts of their logical induction paper were published/presented at TARK-2017, which is a reasonable fit for the paper, and a respectable, though not top-tier, conference.

Comment author: Milan_Griffes 27 October 2017 01:17:56AM 0 points [-]

Looks cool, but I wasn't able to figure out how to use this after 5 minutes of trying. Could you offer a little guidance?

Comment author: DominikPeters 27 October 2017 07:50:19AM 1 point [-]

Not sure about iTunes/iOS; I'd probably need to submit the podcast to Apple for approval, which I don't have the permissions to do :) Maybe there are apps that aren't locked to Apple's directory? Or switch to Android.

Comment author: Milan_Griffes 26 October 2017 08:31:19PM 1 point [-]

First world problem – I usually listen to podcasts in the iOS podcast app. The EconTalk archives in that app only go back to 2015, but a lot of the recommended episodes are older.

I can download episodes in my iPhone browser, but the in-browser player only plays at 1x. Turns out I can't stand listening to podcast audio at 1x anymore (needs to be at least 1.5x to be palatable).

Any thoughts about how to access old episodes at high playback speeds on an iPhone?

Comment author: DominikPeters 26 October 2017 09:54:55PM *  1 point [-]

I've made a feed with Wiblin's top 10 episodes for easy importing into podcast apps.

http://bit.ly/econtalk-wiblin

expands to https://dl.getdropbox.com/s/hjdlhtv6xtklhxv/econtalk-wiblin.xml

Comment author: DominikPeters 03 August 2017 08:11:23PM *  3 points [-]

Under certain circumstances, moral uncertainty over theories that are purely ordinal may lead to the recommendation to split. Example: suppose there are three charities A, B, C, and four options: donating 100% to one of A, B, C, or splitting the money equally between them (call this option S). Let's ignore other ways of splitting. Suppose you have equal credence of 1/3 in each of three theories:

1: A > S > B > C

2: B > S > C > A

3: C > S > A > B

Given a ranking over the charities, it is rational in something like a von Neumann-Morgenstern sense to rank S second: since S is an equal lottery over the three charities, a theory whose top charity is sufficiently better than the other two will rank S above its second-favourite pure option. And with these theories and these credences, one can check that S is the Condorcet winner and also the unique Borda winner, so S would be uniquely recommended by essentially all voting rules, including Borda, the system favoured by Will MacAskill. In this example, contrary to the example in the OP, option S is not Pareto-dominated by another option, so the unanimity principle does not bite.
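For concreteness, here is a minimal sketch (in Python, my own choice, since the thread contains no code) that checks both claims for the profile above. The rankings and equal weighting are taken straight from the example; everything else is illustrative scaffolding.

```python
from itertools import combinations

# Each theory's ranking, best first (taken from the example above).
rankings = [
    ["A", "S", "B", "C"],  # theory 1
    ["B", "S", "C", "A"],  # theory 2
    ["C", "S", "A", "B"],  # theory 3
]
options = ["A", "B", "C", "S"]

# Borda: with 4 options, first place scores 3 points, last place 0.
borda = {x: sum(len(r) - 1 - r.index(x) for r in rankings) for x in options}
print("Borda scores:", borda)  # {'A': 4, 'B': 4, 'C': 4, 'S': 6} -- S wins uniquely

# Condorcet: count each pairwise majority contest.
for x, y in combinations(options, 2):
    x_wins = sum(r.index(x) < r.index(y) for r in rankings)
    print(f"{x} vs {y}: {x_wins}-{len(rankings) - x_wins}")
# A, B, C form a majority cycle among themselves, but S beats each of
# A, B, C by 2-1, so S is the Condorcet winner.
```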

This example crucially depends on only ordinal information being available: with cardinal information (and expected value maximisation) we would never uniquely recommend splitting, as Tom notes. So I don't think the argument for splitting from moral uncertainty is particularly strong or robust.