I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...
TL;DR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.
AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...
Most of these amount to "people in the space knew this wouldn't work." Could you share more specific criticisms? As Aidan said, the biggest successes come from projects no one else would take on, so without more detail that seems like a very weak criticism.
As the Soviet Union collapsed in 1991, the fate of its weapons of mass destruction (WMD) programs presented a new type of catastrophic risk: what would happen to all the nuclear, biological, and chemical weapons and materials, and to the scientists who had worked on them? The nuclear weapons were distributed across what were about to become four separate countries (Belarus, Kazakhstan, Russia, and Ukraine). Moreover, the thousands of experts in those weapons, many of whom went unpaid for months at a time as the Soviet economy collapsed, could easily be tempted to sell information to, or even work directly for, states that were then seeking to build WMD programs, such as Iran and North Korea.
But, by the end of the decade, Belarus, Kazakhstan, and Ukraine had agreed to dismantle or return all their nuclear weapons to Russia[1] and joined the Treaty on the Non-Proliferation of Nuclear Weapons...
I have heard rumours that an AI Safety documentary is being made. Separately, a good friend of mine is also seriously considering making one, though he isn't "in" AI Safety. If you know who the first group is and can put me in touch with them, it might be worth the two projects getting across each other's plans.
A feature I'd love on the Forum: while posts are read back to you, the part of the text currently being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people with wandering minds like me).
For what it's worth, I would find the first part of the issue (i.e. making the player "floating" or "sticky") already quite useful, and it seems much easier to implement.
In the philosophy of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.
I think most people in EA circles subscribe to the computational theory of mind, which implies that any sufficiently powerful computing device can instantiate a sentient being. (In the simplest case, by simulating a physical brain in sufficient detail.)
Computationalism does not, on its own, solve the identity problem. If two computers are running the exact same simulation of a person, is destroying one of them equivalent to killing a person, even though there's a backup? What about merely turning one off, so that it could be turned on again later? These are moral questions, not factual ones, and intuitions differ.
Treating each simulation as its own separate moral patient runs into problems once the substrate is taken into account. Consider...
LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.
We're running a rationalist conference!
The ticket cost is $400 minus your LW karma in cents.
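As a worked example of the pricing rule, here is a minimal sketch. Note two assumptions not stated in the announcement: that karma converts at 1 point = 1 cent, and that the price floors at $0.

```python
def ticket_price_usd(lw_karma: int) -> float:
    """Ticket price: $400 minus LW karma in cents.

    Assumptions (mine, not the organizers'): 1 karma point = 1 cent
    of discount, and the price never goes below $0.
    """
    return max(400 - lw_karma / 100, 0.0)

# 10,000 karma = 10,000 cents = $100 off
print(ticket_price_usd(10_000))  # 300.0
```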
Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.
Go to Less.Online to see who's attending and for details on the venue, location, housing, the event's relation to Manifest, and more.
We'll post more updates about this event over the coming weeks as it all comes together.
...If LessOnline is an awesome rationalist event,
I desire to believe that LessOnline is an awesome rationalist event;
If LessOnline is not an awesome rationalist event,
I desire to believe that LessOnline is not an awesome rationalist event;
Let me not become attached to beliefs I may not want.
From the linked report:
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage.
Here's a story I recently heard from someone I trust:
An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before...
This post was cross-posted from the substack Thing of Things with the permission of the author.
In defense of trying things out
The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...
It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) not to recognise the same practice when someone else engages in it.
This is a link-post from the CGD Blog. For the original post and downloadable Note, please visit: https://www.cgdev.org/publication/1-trillion-paradox-why-reforming-research-publishing-should-be-global-priority.
----
Our research system is a perplexing paradox. Each year,...
Possibly an infohazard, but would donating to Sci-Hub be the most cost-effective way to tackle this problem? Piracy had a massive effect on the cost structures of the entertainment industry; even if it didn't remove the big players here, it would force them to lower prices. (Moving to preprint servers is hard, given the way major journals control status in established fields.)
The only other way out I see is regulation, and lobbying the EU in particular should be somewhat effective: they're pro-regulation, they govern a significant part of this industry, and they previously supported (then rescinded) Plan S.
Interesting! I see it under "Opinions" on their homepage when I check now. Maybe it had something to do with them refreshing the content on the page at some point after you checked, or something else.