Wei Dai

4029 karma · Joined Jun 2015


Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.

Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea's fertility decreased from 0.78 to 0.7 in just one year, with 2.1 being replacement level) based on high materialism (which, interestingly, the paper suggests is really about status signaling rather than actual "material" concerns).

The paper did not propose a genetic explanation of this high materialism, but if it had, I would hope that people wouldn't immediately dismiss it based on its similarity to other hypotheses historically or currently misused by anti-Semites. (In other words, the logic of this article seems to lead to absurd conclusions that I can't agree with.)

All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda

From my perspective, both sides of this debate are often pushing political agendas. It would be natural, but unvirtuous, to focus our attention on the political agenda of only one side, or to pick sides of an epistemic divide based on which political agenda we like or dislike more. (If I misinterpreted you, please clarify what implications you wanted people to draw from this paragraph.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I disagree with Will a bit here, and think that SBF's utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior

I'm pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; witness the many societies with poor governance despite a near-universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again, I'm pretty confused and don't know how relevant this is, but it seems worth pointing out.)

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.

BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and "something nobody has thought of yet", but I feel like his credence for "something like utilitarianism" is too low. I'm curious to understand both why your credence for it is so high, and why his is so low.)

My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.

I think he meant conditional on error theory being false, and also conditional on the correct view not being "some moral view we've never thought of".

Here's a quote of what Will said starting at 01:31:21: "But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there's just no correct moral view. Very large faction to like some moral view we've never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don't think I'm."

Overall it seems like Will's moral views are pretty different from SBF's (or what SBF presented to Will as his moral views), so I'm still kind of puzzled about how they interacted with each other.

If future humans were in the driver’s seat instead, but with slightly more control over the process

Why only "slightly" more control? It's surprising to see you say this without giving any reasons or linking to some arguments, as this degree of alignment difficulty seems like a very unusual position that I've never seen anyone argue for before.

The source code was available, but if someone wanted to claim compliance with the NIST standard (in order to sell their product to the federal government, for example), they had to use the pre-compiled executable version.

I guess there's a possibility that someone could verify the executable by setting up an exact duplicate of the build environment and re-compiling from source. I don't remember how much I looked into that possibility, and whether it was infeasible or just inconvenient. (Might have been the former; I seem to recall the linker randomizing some addresses in the binary.) I do know that I never documented a process to recreate the executable and nobody asked.
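For concreteness, here is a minimal sketch of what that kind of verification could look like (purely hypothetical; the file names are made up, and it assumes a fully deterministic toolchain, which the address-randomizing linker mentioned above would break): rebuild the module from source in a duplicate build environment and compare cryptographic digests of the two binaries.

```python
# Hypothetical verification sketch: compare the distributed binary against a
# local rebuild from the published source. File names are illustrative only.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256_of("official/cryptomodule.dll")  # pre-compiled, certified binary
rebuilt = sha256_of("rebuild/cryptomodule.dll")    # output of rebuilding from source

if official == rebuilt:
    print("Match: the distributed executable corresponds to the published source.")
else:
    print("Mismatch: non-deterministic build (e.g., randomized link addresses) or different source.")
```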

It’s not clear to me why human vs. AIs would make war more likely to occur than in the human vs. human case, if by assumption the main difference here is that one side is more rational.

We have more empirical evidence that we can look at when it comes to human-human wars, making it easier to have well-calibrated beliefs about chances of winning. When it comes to human-AI wars, we're more likely to have wildly irrational beliefs.

This is just one reason war could occur, though. Perhaps a more likely reason is that there won't be a way to maintain the peace that both sides can be convinced will work, and that is sufficiently cheap that its cost doesn't eat up all of the gains from avoiding war. For example, how would the human faction know that if it agrees to peace, the AI faction won't fully dispossess the humans at some future date when it's even more powerful? Even if AIs are able to come up with some workable mechanisms, how would the humans know that they're not just a trick?

Without credible assurances (which seem hard to come by), I think that if humans do agree to peace, the most likely outcome is that humanity gets dispossessed anyway in the not-too-distant future, either gradually (for example, by getting scammed/persuaded/blackmailed/stolen from in various ways) or all at once. I think society as a whole won't have a strong incentive to protect humans, because they'll be almost pure consumers (not producing much relative to what they consume), and such classes of people have often been killed or dispossessed in human history (e.g., landlords after communist takeovers).

I don’t think this follows. Humans presumably also had empathy in e.g. 1500, back when war was more common, so how could it explain our current relative peace?

I mainly mean that without empathy/altruism, we'd probably have even more wars, both now and then.

To the extent that changing human nature explains our current relatively peaceful era, this position seems to require that you believe human nature is fundamentally quite plastic and can be warped over time pretty easily due to cultural changes.

Well, yes, I'm also pretty scared of this. See this post where I talked about something similar. I guess overall I'm still inclined to push for a future where "AI alignment" and "human safety" are both solved, instead of settling for one in which neither is (which I'm tempted to summarize your position as, but I'm not sure if I'm being fair).

What are some failure modes of such an agency for Paul and others to look out for? (I shared one anecdote with him, about how a NIST standard for "crypto modules" made my open-source cryptography library less secure: one of its requirements had the side effect that the library could only be certified as standard-compliant if it was distributed in executable form, forcing people to trust me not to have inserted a backdoor into the executable binary, and NIST didn't budge when we tried to get an exception to this requirement.)

I've looked into the game theory of war literature a bit, and my impression is that economists are still pretty confused about war. As you mention, the simplest model predicts that rational agents should prefer negotiated settlements to war, and it seems unsettled what actually causes wars among humans. (People have proposed more complex models incorporating more elements of reality, but AFAIK there isn't a consensus as to which model gives the best explanation of why wars occur.) I think it makes sense to be aware of this literature and its ideas, but there's not a strong argument for deferring to it over one's own ideas or intuitions.

My own thinking is that war between AIs and humans could happen in many ways. One simple (easy to understand) way is that agents will generally refuse a settlement worse than what they think they could obtain on their own (by going to war), so human irrationality could cause a war if, e.g., the AI faction thinks it will win with 99% probability while humans think they could win with 50% probability, so that each side demands more of the lightcone (or resources in general) than the other side is willing to grant.
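To make the arithmetic concrete, here is a minimal sketch of that argument (my own illustrative numbers and function name, not anything from the literature): each side accepts a peace deal only if its share is at least as good as what it subjectively expects from war, so win probabilities that are miscalibrated enough to sum to well over 1 leave no split that both sides prefer to fighting.

```python
# Minimal sketch of the bargaining-range argument above (illustrative numbers only).

def bargaining_range(p_a, p_b, cost_a, cost_b):
    """Return the interval of splits (share of the prize going to side A) that both
    sides prefer to war, or None if no such split exists.

    p_a, p_b: each side's *subjective* probability of winning the whole prize
              (normalized to 1); these need not be consistent with each other.
    cost_a, cost_b: each side's expected cost of fighting, in units of the prize.
    """
    a_min = max(0.0, p_a - cost_a)  # smallest share A prefers over fighting
    b_min = max(0.0, p_b - cost_b)  # smallest share B prefers over fighting
    if a_min + b_min <= 1.0:
        return (a_min, 1.0 - b_min)  # any split in this interval beats war for both
    return None                      # incompatible demands: war

# Roughly calibrated beliefs (summing to about 1): a peaceful range exists.
print(bargaining_range(p_a=0.99, p_b=0.01, cost_a=0.05, cost_b=0.05))  # (0.94, 1.0)

# The scenario above: the AI faction expects to win with 99% probability while
# humans expect 50%, so their minimum demands exceed the whole prize -> war.
print(bargaining_range(p_a=0.99, p_b=0.50, cost_a=0.05, cost_b=0.05))  # None
```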

To take this one step further, I would say that given that many deviations from the simplest game-theoretic model do predict war, war among consequentialist agents may well be the default in some sense. Also, given that humans often do (or did) go to war with each other, our shared values (i.e., the extent to which we do have empathy/altruism for others) must contribute to the current relative peace in some way.

I was curious why, given Will's own moral uncertainty (in this interview he mentioned having only 3% credence in utilitarianism), he wasn't concerned about SBF's high confidence in utilitarianism, but I didn't hear the topic addressed. Maybe @William_MacAskill could comment on it here?

One guess is that apparently many young people in EA are "gung ho" on utilitarianism (mentioned by Spencer in this episode), so perhaps Will just thought that SBF wasn't unusual in that regard? One lesson could be that such youthful over-enthusiasm is more dangerous than it seems, and that EA should do more to warn people about the dangers of too much moral certainty and overconfidence in general.
