
ESRogs

396 karma · Joined Sep 2014

Posts
1


Comments
64

I've developed a clean mathematical framework in which possibilities like this can be made precise, the assumptions behind them can be clearly stated, and their value can be compared.

Sorry if I'm missing something (I've only skimmed the paper), but is the "mathematical framework" just the idea of integrating value over time?

I'm quite surprised to see this idea presented as new. Isn't this idea very obvious? Haven't we been thinking this way all along?

Like, how else could you possibly think of the value of the future of humanity? (The other mathematically simple option that comes to mind is to only value some end state and ignore all intermediate value, but that doesn't seem very compelling.)

Again, apologies if I'm missing something, which seems likely. Would appreciate anyone who can fill in the gaps for me if so!

I'm surprised by all the disagree votes on a comment that is primarily a question.

Do all the people who disagreed think it's obvious whether Ben meant while he was working at AR or subsequently? If so, which one?

(I'm guessing the disagree votes were meant to register disagreement with my claim that it's relatively normal for interviewers / employers to tell candidates reasons a job might not be a good fit for them. Is that it, or something else?)

These people knew about one of the biggest financial frauds in U.S. history but didn't try to stop it

I think you're stretching here. Nowhere in the article does it suggest that the EA leaders actually knew about ongoing fraud.

It just says (as in the quotes you cited) that they'd been warned Sam was shady. That's very different from having actual knowledge of ongoing fraud. If the article wanted to make that claim, I think it would have been more direct about it.

Sam was fine with me telling prospective AR employees why I thought they shouldn’t join (and in fact I did do this)

Didn't quite follow this part. Is this referring to while you were still at AR or subsequently?

If it was while you were still working there, that seems pretty normal. Not every candidate should be sold on the job. Some should be encouraged not to join if it's not going to be a good fit for them. Why would this even be controversial with Sam? Or were you telling them not to join specifically because of criticisms you had of the CEO?

If it was subsequent, how do you know he was fine with it? What would he have done if he wasn't fine with it?

Your summary of the article's thesis doesn't seem right to me:

b. Even though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happen

c. This information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse

I interpreted the article as arguing more that EA leaders should not have promoted FTX / Sam as a model to look up to, or aligned themselves with him, rather than that they should have somehow prevented the fraud.

The article has this to say about the knowledge and responsibility of early employees:

None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed specific examples of lawbreaking. Even if they knew Bankman-Fried was dishonest and unethical, they say, none of them could have foreseen a fraud of this scope.

... which really doesn't sound to me like it's blaming those early employees for what happened. If anything, they come across as the heroes of the story!

In contrast, what seems to be particularly highlighted in the article is Will MacAskill's association with Sam:

Even after Bankman-Fried left the board of CEA, he retained MacAskill’s support, both in public and private. In a 2022 interview on the 80,000 Hours podcast, MacAskill describes himself as “remarkably aligned with Sam,” and said the FTX Future Fund could be “an enormous inflection point for EA.” FTX advertisements used the language of effective altruism. “I’m on crypto because I want to make the biggest global impact for good,” read one FTX ad, which featured a photo of Bankman-Fried.

So the thesis of the article seems to me more like, "EA leaders (esp. Will) should have known this guy was sketchy and stayed away," rather than "EA leaders should have prevented the FTX fraud."

FWIW, I think such a postmortem should start w/ the manner in which Sam left JS. As far as I'm aware, that was the first sign of any sketchiness, several months before the 2018 Alameda walkout.

Some characteristics apparent at the time:

  • joining CEA as "director of development," which looks like it was a ruse to avoid JS learning about his true intentions
  • hiring away young traders who were in JS's pipeline at the time

I believe these were perfectly legal, but to me they look like the first signs that SBF was inclined to:

  • choose the (naive) utilitarian path over the virtuous one
  • risk destroying a common resource (good will / symbiotic relationship between JS and EA) for the sake of a potential prize

These were also the first opportunities I'm aware of that the rest of us had to push back and draw a harder line in favor of virtuous / common-sense ethical behavior.

If we want to analyze what we as a community did wrong, this to me looks like the first place to start.

In the past two years, the technical alignment organisations which have received substantial funding include:

In context it sounds like you're saying that Open Phil funded Anthropic, but as far as I am aware that is simply not true.

I think maybe what you meant to say is something like, "These orgs that have gotten substantial funding tend to have ties to Open Phil, whether OP was the funder or not." Might be worth editing the post to make that more explicit, so it's clear whether you're alleging a conflict of interest or not.

I'll limit myself to one (multi-part) follow-up question for now —

Suppose someone in our community decides not to defer to the claimed "scientific consensus" on this issue (which I've seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?

I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).

Let's assume they try to distinguish between the two senses of "racism" that you mention, and try to treat all people respectfully and fairly. They don't make a point of trumpeting their conclusion, since it's not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.

Let's say they also try to examine their own biases and take into account how that might have influenced how they interpreted various claims and pieces of data. But after doing that, their honest assessment is still the same.

Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?

Would you have any other advice for them, other than maybe something like, "Check your work again. You must have made a mistake. There's an error in your thinking somewhere."?

Thanks, I appreciate the thoughtful response!
