I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...
In the philosophy of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.
I think most people in EA circles subscribe to the computational theory of mind, which means that...
If you were being tortured, and I created a copy of you being tortured identically, this seems horrible (all else equal). I don't see why it would matter any less, let alone somewhat less or, as implied here, not at all.
On this view, if a copy of you were to be tortured in mental states X in the future, then it wouldn't be bad for you to be tortured in mental states X now. Otherwise, you'd have to count only simultaneous states, or states within some window of time, or discount duplicates in some other way.
If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.
Thank you, M, for sharing this with me & encouraging me to connect.
I thought he spelled out his ETG criticism quite clearly in the article, so I’ll paraphrase what I took away from it here.
I think he would argue that, for the same person in the same job, donating X% of their income is a better thing. However, the ETG ethos that has hung around in the community promotes seeking out extremely high-paying jobs in order to donate even more money. These jobs often bring about more harm in turn (in an absolute sense, and possibly to the point that ETG is net-negative, for example in the case of SBF), especially if we live in an economic system that rewards behaviour that profits off negative externalities.
This post was cross-posted from the Substack Thing of Things with the permission of the author.
In defense of trying things out
The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...
Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!
We’d like to help people in EA find impactful work, so we’ve set up this thread, and another called Who's hiring? (we did this last in 2022[2]).
Consider...
TL;DR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.
AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...
Most of these are just "people in the space knew this wouldn't work". Could you share more specific criticisms? As Aidan said, the biggest successes come from projects no one else would do, so without more information that seems like a very weak criticism.
As the Soviet Union collapsed in 1991, the fate of its weapons of mass destruction (WMD) programs presented a new type of catastrophic risk: what would happen to all the nuclear, biological, and chemical weapons and materials, and the scientists who worked on them? The nuclear weapons were distributed across what were about to become four separate countries (Belarus, Kazakhstan, Russia, and Ukraine). Plus, the thousands of experts in those weapons, many of whom went unpaid for months at a time as the Soviet economy collapsed, could be easily tempted to sell information to, or even work directly for, states that were then seeking to build out WMD programs, such as Iran and North Korea.
But, by the end of the decade, Belarus, Kazakhstan, and Ukraine had agreed to dismantle or return all their nuclear weapons to Russia[1] and joined the Treaty on the Non-Proliferation of Nuclear Weapons...
I have heard rumours that an AI Safety documentary is being made. Separately, a good friend of mine is also seriously considering making one, but he isn't "in" AI Safety. If you know who the first group is and can put me in touch with them, it might be worth getting across each other's plans.
LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.
We're running a rationalist conference!
The ticket cost is $400 minus your LW karma in cents.
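As a quick sketch of that pricing rule (the function name is mine, and the floor at $0 is an assumption on my part, since the announcement doesn't say what happens if your karma exceeds 40,000):

```python
def ticket_price_usd(lw_karma: int) -> float:
    """Ticket cost: $400 minus 1 cent per point of LW karma (assumed not to go below $0)."""
    discount = lw_karma * 0.01  # karma in cents, converted to dollars
    return max(0.0, 400.0 - discount)
```

So 1,000 karma would knock $10 off the price, and anyone with 40,000+ karma would attend free (under the assumed floor).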
Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.
Head over to Less.Online to learn about who's attending, the venue and location, housing, the event's relation to Manifest, and more.
We'll post more updates about this event over the coming weeks as it all comes together.
...If LessOnline is an awesome rationalist event,
I desire to believe that LessOnline is an awesome rationalist event;
If LessOnline is not an awesome rationalist event,
I desire to believe that LessOnline is not an awesome rationalist event;
Let me not become attached to beliefs I may not want.
If EA currently
then I imagine that it would be prett...