This is a special post for quick takes by Nathan_Barnard. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the 3rd most populous country in the world by 2100 and appears to be seeing exponential growth in GDP per capita. I'm not claiming that Nigeria being a great power in 2100 is a likely outcome, but nor does it seem impossible. It isn't clear to me that Nigeria has dramatically worse institutions than India, yet I expect India to be a great power by 2100. Given how neglected this question seems, it could be really valuable for someone to do some work on it.

I don't know, but I think it would be great to look into.

There was a proposal to make a "Rising Powers" or "BRICS" tag, but the community was most interested in making one for China. I'd like to see more discussion of other rising powers, including the other BRICS countries.

I agree! I think there's some issue here (I don't know if there's a word for it) where some critical mass of effort on foreign powers is focused on China, leaving other countries with a big deficit of attention. I'm not sure what the solution is, other than perhaps writing some kind of "the case for becoming a [country X] specialist" post for a bunch of potentially influential countries.

Yeah, that sounds right. I don't even know how many people are working on strategy based around India becoming a superpower, which seems completely plausible.

Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come across as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.

Two books I recommend on structural causes of and solutions to global poverty. The Bottom Billion by Paul Collier focuses on the question of how failed and failing states in very poor countries can reach middle-income status, with a particular focus on civil war. It also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy can become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with the more neoliberal models followed in other parts of Asia that are now mostly middle-income countries.

I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way empirical claims can be discriminatory is when they express discriminatory claims with no evidence, and people refuse to change their beliefs based on evidence. The other way they can be discriminatory is when they concern the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.

If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
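For concreteness, here is the textbook counterexample for the discontinuity case (my illustration, not part of the original quick take): lexicographic preferences on pairs of real numbers are complete and transitive, yet no real-valued utility function can represent them.

```latex
% Lexicographic preferences on R^2 (a standard counterexample, sketched):
%   (x_1, y_1) \succ (x_2, y_2)  iff  x_1 > x_2, or (x_1 = x_2 and y_1 > y_2).
%
% Suppose some u : R^2 -> R represented \succ. Since (x,1) \succ (x,0) for every x,
% each interval below is nonempty, and for x < x' we have (x',0) \succ (x,1),
% so the intervals are pairwise disjoint:
\[
  I_x = \bigl( u(x,0),\; u(x,1) \bigr), \qquad
  I_x \cap I_{x'} = \varnothing \quad \text{for } x \neq x'.
\]
% Choosing a rational number from each I_x would give uncountably many distinct
% rationals, a contradiction. So these complete, transitive, but discontinuous
% preferences admit no utility representation.
```

Incomplete preferences fail for an even simpler reason: a real-valued function automatically makes every pair of options comparable.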

What do you mean by correct?

When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement "there may be no utility function that accurately describes the true value of things"?

Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?

A 6-line argument for AGI risk (a compressed formal reading follows the list):

(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability

(2) An AGI could be sufficiently intelligent that it is limited only by physics and computability, but humans can't be

(3) An AGI will come into existence

(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons and the AGI's goals will be met

(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI

(6) It is more morally valuable for human goals to be met than an AGI's goals
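As a reading of how the premises combine (my compression, not the post's wording), (3)–(6) chain by modus ponens once (4) and (5) are bundled into "an existing, unaligned AGI means human goals aren't met in the long run":

```latex
% Shorthand (illustrative labels only):
%   E : an AGI comes into existence              (premise 3)
%   A : the AGI's goals are aligned with human goals
%   H : human goals are met in the long run
%   W : the outcome is morally worse than human goals being met
\begin{align*}
  &\text{(3)}     && E \\
  &\text{(4)+(5)} && (E \land \lnot A) \rightarrow \lnot H \\
  &\text{(6)}     && \lnot H \rightarrow W \\
  &\therefore     && \lnot A \rightarrow W
\end{align*}
```

Premises (1) and (2) don't appear in the derivation itself; their role is to make (3), and the strength of (4)–(5), plausible.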
