Ben Millwood

I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.

Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.

Yeah, it seems possible to be a longtermist without thinking that human extinction entails loss of all hope, but extinction still seems more important to the longtermist than to the neartermist.

IMO, most x-risk from AI probably doesn't come from literal human extinction but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.

Valid. I guess longtermists and neartermists will also feel quite differently about this fate.

Perhaps we did not emphasise enough the simple point "never commit a crime". As I said in the previous point, there have been extensive warnings against naive “ends justify the means” thinking from many leaders (MacAskill, Ord, Karnofsky, CEA Guiding Principles, 80,000 Hours career advice, etc).

Nevertheless, we could do even more, for example in 80,000 Hours resources or career/student groups, to emphasise this point. There didn't seem to be many explicit “don't ever commit a crime” warnings (I assume because this should have been so blindingly obvious to any reasonable or moral person).

There are many immoral laws in the world, particularly but not exclusively if you look outside Europe and the US. For example, EAs living in countries where homosexuality is illegal should, I think, have our support in breaking the law if they want to.

In fact, I think most people with a cursory understanding of the history of activism will be aware of the role that civil disobedience has sometimes had in correcting injustice, so breaking laws can sometimes be even virtuous. In extreme cases, one can even imagine it being morally obligatory.

I think a categorical "never commit crimes" is hard to take seriously without some explicit response to this context. I definitely don't think we should claim it's obvious that no-one should ever break the law.

It is intuitively "obvious" that Sam's crimes aren't crimes like these. (I pretty much always second-guess the word obvious, but I'm happy to use it here.) But that's because we can judge for ourselves that they're harmful and immoral, not because they're against the law. Perhaps someone could make an argument that sometimes you should follow the law even when your own morality says you should do something else, but I don't think it's going to be a simple or obvious argument.

Longtermism suggests a different focus within existential risks, because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends", even though from the perspective of people alive today these outcomes are very similar.

I think relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, most easily reached by threats that deliberately pursue it; natural disasters don't tend to counter-adapt when some people survive.

Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.

Longtermism also lowers the probability bar that an extinction risk has to reach before being worth taking seriously. I think this used to be a bigger part of the reason why people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.
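To make the "lower bar" point concrete, here's a toy back-of-the-envelope sketch in Python. Every number in it (the value placed on the present vs the future, the benchmark cost per life saved, the budget) is made up purely for illustration:

```python
# Toy back-of-the-envelope: how much extinction-risk reduction an intervention
# would need to buy to match a strong neartermist benchmark charity.
# All numbers below are made up purely for illustration.

NEARTERMIST_VALUE = 8e9    # value at stake ~= people alive today
LONGTERMIST_VALUE = 1e15   # value at stake if you also count potential future people

def risk_reduction_needed(budget, cost_per_life_elsewhere, value_at_stake):
    """Reduction in extinction probability needed for `budget` spent on x-risk
    to equal spending the same budget on the benchmark intervention."""
    lives_saved_elsewhere = budget / cost_per_life_elsewhere
    return lives_saved_elsewhere / value_at_stake

budget = 1e9               # a $1bn programme
benchmark_cost = 5e3       # an illustrative $5k per life saved

print(risk_reduction_needed(budget, benchmark_cost, NEARTERMIST_VALUE))  # ~2.5e-05
print(risk_reduction_needed(budget, benchmark_cost, LONGTERMIST_VALUE))  # ~2e-10
```

The point is just that the longtermist's bar is orders of magnitude lower, so risks that look too improbable to bother with on neartermist numbers can still clear it.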

Answering this question depends a little on having a sense of what the "non-longtermist status quo" is, but:

  • I think there's more than one popular way of thinking about issues like this,
    • in particular I think it's definitely not universal to take existential risk seriously,
  • I think common sense and the status quo include some (at least partial) longtermism: e.g. popular rhetoric around climate change has often assumed that we were taking action primarily with our descendants in mind, rather than ourselves.

Even Alameda accepting money for FTX at all was probably bank fraud, even if they had transferred it immediately, because they told the banks that the accounts would not be used for that (there's a section in the OP about this).

See also this AML / KYC explainer, which I admit I have not read all of, but which seems pretty good. In particular:

Many, many crimes involve lies, but most lies told are not crimes and most lies told are not recorded for forever. We did, however, make a special rule for lies told to banks: they’re potentially very serious crimes and they will be recorded with exacting precision, for years, by one of the institutions in society most capable of keeping accurate records and most findable by agents of the state.

This means that if your crime touches money, and much crime is financially motivated, and you get beyond the threshold of crime which can be done purely offline and in cash, you will at some point attempt to interface with the banking system. And you will lie to the banks, because you need bank accounts, and you could not get accounts if you told the whole truth.

The government wants you to do this. Their first choice would be you not committing crimes, but contingent on you choosing to break the law, they prefer you also lie to a bank.

(I found out about this explainer because Matt Levine at Bloomberg linked to it; a lot of what I know about financial crime in the US I learned from his Money Stuff column.)

There are some assumptions that go into what counts as "liquid", and what valuation your assets have, that may be relevant here. One big thing that I think happened is that FTX / Alameda were holding a lot of FTT (and other similar assets), whose value was sharply correlated with the perceived health of FTX. That means that while assets may have appeared to exceed liabilities, in an actual bank run a large fraction of the assets would just evaporate, leaving you very predictably underwater. So just looking at naive dollar valuations isn't sufficient here.

(Not confident how big of an issue this is or how much your numbers already took it into account)
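A toy sketch of the effect I mean, with entirely made-up numbers:

```python
# Made-up numbers: why "assets exceed liabilities" on paper can still mean
# predictable insolvency when a big asset is correlated with your own
# perceived health.

liabilities = 10.0  # customer deposits owed, $bn

assets = {
    "cash_and_stablecoins": 3.0,  # roughly holds its value in a run
    "FTT_and_similar": 9.0,       # collapses if confidence in the exchange collapses
}

# Naive balance sheet: looks solvent.
print(sum(assets.values()) - liabilities)  # +2.0

# In an actual run, confidence falls, so FTT falls with it (say by 90%):
stressed = assets["cash_and_stablecoins"] + 0.1 * assets["FTT_and_similar"]
print(stressed - liabilities)  # -6.1
```

On those (made-up) numbers the naive balance sheet shows a $2bn surplus, but the run scenario leaves a $6.1bn hole.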

  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future

For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al., instead of a warm welcome. He probably still starts the FTX Future Fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?

Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?

I think this isn't a good example of moral trade (afaik the issue here is an empirical one, not affected by differing moral values), but even if it were, I don't think it would answer the OP's question unless you were clearer on whether the OP is Alice or Bob, and how they'd find Bob / Alice.
