
tailcalled

52 karma · Joined November 2022

Comments (7)

Doesn't the $67 billion number cited for capabilities include a substantial amount of work being put into reliability, security, censorship, monitoring, data protection, interpretability, oversight, clean dataset development, and alignment method refinement? At least anecdotally, the AI work I see at my non-alignment-related job mainly falls under these sorts of things.

Can you give 5 examples of cases where rationalists/EAs should defer more to experts?

It's interesting: I had heard some vague criticism from social justice communities that EA is bad, but at first I dismissed it. Your review made me look up the book and compare what the book says to how EAs (that is, you) interpret it. And I've got to say, a lot of the social justice criticism of EA really does look spot-on as a critique of your review. I'd encourage readers to do some epistemic spot checks of this review; at least when I did so, it didn't seem to fare super well. On the other hand, I will probably read the full book when I find the time.

Since A’s and B’s guesses are identically accurate, it seems most sensible to take the average in order to be closest to the truth. And even if you were A or B, if you want to be closest to the truth, you should do the same.

Why not add them together, and declare yourself 90% sure that it is an oak tree?

Or rather, since simply adding them together may get you outside the [0, 1] range, why not convert each to log odds, subtract off the prior from each to obtain the evidence, add the evidence together, add back in the prior, and then convert the result back to a probability?
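For concreteness, here is a minimal sketch of that pooling procedure. The prior of 0.2 and the two estimates of 0.45 are hypothetical numbers chosen purely for illustration; they are not taken from the post being replied to.

```python
import math

def logit(p):
    """Convert a probability to log odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(prior, estimates):
    """Sum each observer's evidence (their log odds minus the prior's log odds),
    add the prior back in, and convert back to a probability."""
    evidence = sum(logit(p) - logit(prior) for p in estimates)
    return sigmoid(logit(prior) + evidence)

# Hypothetical example: shared prior of 0.2, two observers each reporting 0.45.
print(pool(prior=0.2, estimates=[0.45, 0.45]))  # ~0.73
```

Under this scheme, two observers who independently update upward from the prior push the pooled estimate above either individual estimate, rather than merely averaging them.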

Hm, my understanding is that there is no traditional institution that will issue a "yep, this person is good" document that works across contexts, including e.g. for people who work in crypto, so any approval process would require a lot of personal judgement?

That said, I don't disagree with the notion of using preexisting approval systems like criminal records; my suggestion is more about making sure that one does in fact use them in the correct proportions, and in particular credibly committing to doing so in the future.

I should maybe have been more explicit in stating the actual policy proposal:

I don't think paying back necessarily needs to be done on the level of an individual project/grant. Insofar as the EA community is, well, a community, it might be viable to take responsibility on the level of the community.

For instance, in the discussion I linked to on twitter, the suggestion was that EAs would set up a fund that they could donate to for the victims of FTX.

This would presumably still create plenty of community-wide incentives, as well as incentives among the leaders of EA, because nobody wants their community to waste a lot of resources due to having worked with bad actors. But it would also be much less burdensome to individual grant recipients.