Regulators should review the 2014 DeepMind acquisition. When Google bought DeepMind in 2014, no regulator (not the FTC, the EC's DG COMP, nor the CMA) scrutinized the impact. Why? AI startups have high valuations but low revenues, and so they avoid regulation (...
Concerns over AI safety and calls for government control over the technology are highly correlated, but they should not be.
There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks...
This post correctly identifies some of the major obstacles to governing AI, but ultimately makes an argument for "by default, governments will not regulate AI well," rather than the claim implied by its title, which is that advocating for (specific) AI regulations is net negative -- a type of fallacious conflation I recognize all too well from my own libertarian past.
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).
I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a ...
Strongly agree. I think there's also a motivation gap in knowledge acquisition. If you don't think there's much promise in an idea or a movement, it usually doesn't make sense to spend years learning about it. This leads to large numbers of very good academics writing poorly-informed criticisms. But this shouldn't be taken to indicate that there's nothing behind the criticisms. It's just that it doesn't pay off career-wise for these people to spend years learning enough to press the criticisms better.
Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.
...Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse.
I don't think the global optimal solution is an EA forum that's a cuddly little safe space for me.
I agree with this, but also think the forum "not being cuddly for Sean" and "not driving contributors away" aren't mutually exclusive. Maybe I am not seeing all the tradeoffs though.
Consider donating all or most of your Mana on Manifold to charity before May 1.
Manifold is making several changes to how its platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 ...
This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest.
Imagine an alternate version of the Effective Altruism movement,...
Crosspost of my blog.
You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger...
Thanks for tagging me, Johannes! I have not read the post, but in my mind one should overwhelmingly focus on minimising animal suffering in the context of food consumption. I estimate the harm caused by the annual food consumption of a random person is 159 times that caused by their annual GHG emissions.
Fig. 4 of Kuruc 2023 is relevant to the question. A welfare weight of 0.05 means that one values 0.05 units of welfare in humans as much as 1 unit of welfare in animals, and it would still require a social cost of carbon of over 7 k$/t for prioritising beef...
Many thanks to Andrew Snyder-Beattie and Joshua Monrad for their feedback during this project. This project was completed as part of contract work with Open Philanthropy, but the views and work expressed here do not represent those of Open Philanthropy. All thoughts are...
Thank you so much for flagging this! Very much agreed this is an important correction; the update that the US doesn't dominate the biosecurity spend this way is indeed important and I think a welcome one. Will certainly amend.
I see way too many people confusing movement with progress in the policy space.
There can be a lot of drafts becoming bills with still significant room for regulatory capture in the specifics, which will be decided later on. Take risk levels, for instance, which are subjective - lots of legal leeway for companies to exploit.
Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).