Comment author: Emanuele_Ascani 14 May 2018 04:05:02PM 1 point

Interesting! How did you arrive at the $1,000/yr figure?

Comment author: oge 16 May 2018 07:12:40AM 1 point

That's about the total annual cost of preserving a brain and spinal cord under an Alcor cryonics contract. I assume that the price paid while the patient is alive is roughly the same as the cost of preservation after death.

Comment author: Emanuele_Ascani 13 May 2018 10:15:24AM 3 points

I second this. Research in the area of cryonics could be an effective intervention, but proposing it in this way achieves nothing, since it doesn't do the actual work of assessing its impact per dollar. It doesn't even try.

Comment author: oge 14 May 2018 10:03:57AM 1 point

I estimate it'll cost at least $1,000/yr to preserve a brain. That's about the cost of maintaining a family at global poverty levels.

I should have posted such calculations before posting the excerpts. Thanks for your comments.

Comment author: Khorton 11 May 2018 04:23:58PM 1 point

This story did not make me a more effective or altruistic person, as far as I can tell.

Comment author: oge 12 May 2018 01:43:06PM 0 points

I posted the story to let folks know of a possible altruistic target: letting people live as long as they want by vitrifying their nervous systems for eventual resuscitation.

Comment author: turchin 11 May 2018 03:09:57PM 1 point

To become part of EA, cryonics must become cheap, and to become cheap it should, imho, be pure chemical fixation without cooling (something like aldehyde fixation without cryopreservation), which could cost only a few dollars per brain.

Comment author: oge 12 May 2018 01:36:55PM 1 point

Pure chemical fixation without cooling would be ideal. The extra cryopreservation step is necessary, though, since glutaraldehyde only fixes tissue for months rather than centuries.

Comment author: gworley3 23 February 2018 07:14:55PM 1 point

I think the ELI5 on AI alignment is the same as it has been: make nice AI. Being a little more specific, I like Russell's slightly more precise formulation of this as "align AI with human values", and being even more specific (without jumping to mathematical notation), I'd say we want to design AI that value what humans value, and for us to believe these AI share our values.

Maybe the key thing I'm trying to get at, though, is that alignable AI will be phenomenally conscious, or in ELI5 terms as much people as anything else (humans, animals, etc.). So my position is not just "make nice AI" but "make nice AI people we can believe are nice".

Comment author: oge 26 February 2018 07:01:26PM 0 points

Thanks, Gordon.

"Make nice AI people we can believe are nice" makes sense to me; I hadn't been aware of the "...we can believe are nice" requirement.

Comment author: oge 22 February 2018 07:46:55PM 4 points

Thank you for providing an abstract for your article. I found it very helpful.

(and I wish more authors here would do so as well)

Comment author: oge 22 February 2018 07:43:44PM 1 point

Hi Gordon, I don't have accounts on LW or Medium, so I'll comment on your original post here.

If possible, could you explain like I'm five what your working definition of the AI alignment problem is?

I find it hard to prioritize causes that I don't understand in simple terms.

Comment author: oge 22 February 2018 06:45:51PM 1 point

Small nit: the links in the table of contents lead to a Google Doc, rather than to the body of the article.

Other than that, I love the article. Thanks for the giant disclaimer ;)

Comment author: oge 06 November 2017 02:59:27PM 1 point

Hi Joey, how can one apply for Charity Science's tech lead position? The link on your jobs page just goes to a Github repo.

Comment author: oge 28 September 2016 09:02:51AM 2 points

FYI, I applied to New Incentives on the 1st of September with ~10 years of experience. I haven't heard back.
