
Wei Dai

4063 karma · Joined June 2015

Posts: 7

Comments: 232

I want to note that within a few minutes of posting the parent comment, it received 3 downvotes totaling -14 (I think they were something like -4, -5, -5, i.e., probably all strong downvotes) with no agreement or disagreement votes, and subsequently received 5 upvotes spread over 20 hours (with no further downvotes AFAIK) that brought the net karma up to 16 as of this writing. Agreement/disagreement is currently 3/1.

This pattern of voting seems suspicious (e.g., why were all the downvotes clustered so closely in time?). I reported the initial cluster of downvotes to the mods in case they want to look into it, but have not heard back from them yet. Thought I'd note this publicly in case a similar thing happened or happens to anyone else.

I think too much moral certainty doesn't necessarily make someone dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory, but only a minority of them caused much harm (e.g., by being involved in crusades and inquisitions). I'm not sure it has much to do with realism vs. non-realism, though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.

Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.

This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection), people would tend to converge to similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we'd be able to better handle divergent values through compromise/cooperation.

(Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)

Yeah, there is perhaps a background disagreement between us, where I tend to think there's little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., thoroughly exploring the huge space of possible ideas/arguments/counterarguments), so your concern doesn't seem significant to me in the near term.

It's entirely possible that I misinterpreted David. I asked for clarification from David in the original comment if that was the case, but he hasn't responded so far. If you want to offer your own interpretation, I'd be happy to hear it out.

I'm saying that you can't determine the truth about an aspect of reality (in this case, what causes group differences in IQ), when both sides of a debate over it are pushing political agendas, by looking at which political agenda is better. (I also think one side of it is not as benign as you think, but that's beside the point.)

I actually don't think this IQ debate is one that EAs should get involved in, and said as much to Ives Parr. But if people practice or advocate for what seem to me like bad epistemic norms, I feel an obligation to push back on that.

More specifically, you don't need to talk about what causes group differences in IQ to make a consequentialist case for genetic enhancement, since there is no direct connection between what causes existing differences and what the best interventions are. So one possible way forward is just to directly compare the cost-effectiveness of different ways of raising intelligence.

Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.

Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea's fertility decreased from 0.78 to 0.7 in just one year, with 2.1 being replacement level) based on high materialism (which, interestingly, the paper suggests is really about status signaling rather than actual "material" concerns).

The paper did not propose a genetic explanation of this high materialism, but if it had, I would hope that people wouldn't immediately dismiss it based on its similarity to other hypotheses historically or currently misused by anti-Semites. (In other words, the logic of this article seems to lead to absurd conclusions that I can't agree with.)

All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda

From my perspective, both sides of this debate are often pushing political agendas. It would be natural, but unvirtuous, to focus our attention on the political agenda of only one side, or to pick sides of an epistemic divide based on which political agenda we like or dislike more. (If I misinterpreted you, please clarify what implications you wanted people to draw from this paragraph.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I disagree with Will a bit here, and think that SBF's utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior

I'm pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; see the many societies with poor governance despite a near-universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again, I'm pretty confused and don't know how relevant this is, but it seems worth pointing out.)

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.

BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and "something nobody has thought of yet", but I feel like his credence for "something like utilitarianism" is too low. I'm curious to understand both why your credence for it is so high, and why his is so low.)

My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.

I think he meant conditional on error theory being false, and also on not "some moral view we've never thought of".

Here's a quote of what Will said starting at 01:31:21: "But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there's just no correct moral view. Very large faction to like some moral view we've never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don't think I'm."

Overall it seems like Will's moral views are pretty different from SBF's (or what SBF presented to Will as his moral views), so I'm still kind of puzzled about how they interacted with each other.

If future humans were in the driver’s seat instead, but with slightly more control over the process

Why only "slightly" more control? It's surprising to see you say this without giving any reasons or linking to some arguments, as this degree of alignment difficulty seems like a very unusual position that I've never seen anyone argue for before.

The source code was available, but if someone wanted to claim compliance with the NIST standard (in order to sell their product to the federal government, for example), they had to use the pre-compiled executable version.

I guess there's a possibility that someone could verify the executable by setting up an exact duplicate of the build environment and re-compiling from source. I don't remember how much I looked into that possibility, and whether it was infeasible or just inconvenient. (Might have been the former; I seem to recall the linker randomizing some addresses in the binary.) I do know that I never documented a process to recreate the executable and nobody asked.
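For concreteness, here is a minimal sketch of what the final comparison step of that kind of verification could look like, assuming someone had already duplicated the build environment and recompiled from source. The file names are hypothetical, not the actual module:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the vendor-distributed executable and one
# rebuilt from source in a supposedly identical build environment.
official = sha256_of("crypto_module_official.exe")
rebuilt = sha256_of("crypto_module_rebuilt.exe")

if official == rebuilt:
    print("Bit-for-bit identical: the distributed executable matches the rebuild.")
else:
    # Any non-determinism in the toolchain (embedded timestamps, randomized
    # link-time addresses, etc.) lands here even if the source is unchanged.
    print("Digests differ: the build is not reproducible as-is.")
```

Of course, if the linker really did randomize addresses, the digests would differ even for a faithful rebuild, so a meaningful check would also require either a deterministic build or some way to normalize the non-deterministic sections before comparing.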
