Brian Lui

248 karma · Joined · Working (15+ years) · Retired

Bio

I used to work in finance. I am interested in effective altruism. Highest impact for minimum effort is good.

Comments (22)

One of the quotes is:

Effective altruism swung toward AI safety. “There was this pressure,” says a former member of the community, who spoke on condition of anonymity for fear of reprisals. “If you were smart enough or quantitative enough, you would drop everything you were doing and work on AI safety.”

I think the implication here is that if you are working on global poverty or animal welfare, you must not be smart enough or quantitative enough. I'm not deeply involved, so I don't know whether this quote is accurate.

I think "respectable" is kind of a loaded term that gives longtermism a slightly negative connotation. I feel like a more accurate term would be how "galaxy brain" the cause area is - how much effort and time do you need to explain it to a regular person, or what percentage of normal people would be receptive to a pitch.

The base rate I have in mind is that FTX had access to a gusher of easy money and was run by young, energetic people with minimal oversight and little use of formalized hiring systems. That produced a situation where top management's opinion was the critical factor in who got hired or promoted into influential positions. The more other EA organizations resemble FTX, the more strongly I would believe this.

A few months ago I would have easily agreed with "the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not."

However, I then read about the hiring practices at FTX and updated significantly on this. It's now hard for me to believe that no EA employer would deny job opportunities based on EA Forum hot takes!

Thank you, this is a great example of longtermist thinking working out; it would have been unlikely to happen without it!

What do you think would be a good way to word it?

One of the ideas is that longtermism probably does not increase the EV of decisions made on behalf of future people. Another is that we increase the EV for future people as a side effect of doing things normally. The third is that increasing the EV for future people is something we should care about.

If all of these are true, then it should be true that we don't need longtermism, I think?

Agreed, "probability discounting" is the most accurate term for this. Also, I struck out the part about Cleopatra in the original post, now that I understand the point behind it!

I just found this forum post, which covers much the same ground! I mostly agree with it too.

Effective altruism-like movements in the past have had a wide range of results. The Fabian Society might be an example of a positive impact; in the same time period, Communism would be another output of such a movement.

I think past performance is generally indicative of future results. Unless you have a good reason to think that 'this time is different', and you have a thesis for why the differences will lead to a materially changed outcome, it's better to use the past as the base case.

Point is, I think people have always tended to be significantly more right than wrong about how to change the world. It's not too hard to understand how one person's actions might contribute to an overriding global goal; the problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant, repetitive, or decaying, and merely a prelude to the afterlife. The second was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both of these paradigms, and instead we have a view of precarity: an incredibly good future is in sight, but only if we proceed with caution, wisdom, good institutions, and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.


I like this description of your viewpoint a lot! The entire paradigm for "good outcomes" may be wrong. And we are unlikely to be aware of our paradigm due to "fish in water" perspective problems.
