milkyway

144 karma · Joined

Comments (9)

Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways.

 

I'd be careful not to confuse polished presentation, eloquent speaking and fundraising ability with good epistemics.

I watched the linked video and honestly thought it was a car crash, epistemically speaking.

The main issue is that I don't think any of her arguments would pass the ideological Turing test. She says "Will MacAskill thinks X..." but if Will MacAskill were in the room he would obviously respond "Sorry, no, that's not what I think at all..."

A real low point is when she points at a picture of Nick Bostrom, Stuart Russell, Elon Musk, Jaan Tallinn etc. and suggests their motivation for working on AI is to prove that men are superior to women. 

I appreciate the thoughtful reply.

(1) I don't think that the engineered pandemics argument is of the same type as the Flat Earther or Creationist arguments. And it's not the kind of argument that requires a PhD in biochemistry to follow either. But I guess from your point of view there's no reason to trust me on that? I'm not sure where to go from there.

I've got to go with ... not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are, and so on.

Maybe one question is: why do you think engineered pandemics are implausible?

(2) I agree that you should start from a position of skepticism when people say the End is Nigh. But I don't think it should be a complete prohibition on considering those arguments. 

And the fact that previous predictions have proven overblown is a pattern worth paying attention to (although as an aside: I think people were right to worry during the Cold War — we really did come close to a full nuclear exchange on more than one occasion! The fact that we got through it unscathed doesn't mean they were wrong to worry. If somebody played Russian Roulette and survived, you shouldn't conclude "look, Russian Roulette is completely safe.").

Where I think the pattern of overblown predictions of doom risks breaking down is when you introduce dangerous new technologies. I don't expect technology to remain roughly at current levels; I expect it to be very different in 25, 50, 100 years' time. Previous centuries have been relatively stable because no new dangerous technologies were invented (nuclear weapons aside). You can't extrapolate that pattern into a future that contains, for example, easily available machines that can print Covid-19 but with 10x transmissibility and a 50% mortality rate. Part of my brain wants to say "We will rise to the challenge! Some hero will emerge at the last moment and save the day", but then I remember the universe runs on science and not movie plot lines.

What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones.

 

Really interesting read!

Would you agree that an underlying assumption of conservatism is that continuing 'business as usual' is the safe option?

In Bioterrorism and AI Safety, the assumption is that we're on course for disasters that result in billions of deaths unless we do something radical to change course.

Whether you agree about the risks of Bioterrorism and AGI shouldn't come down to a general vibe of "science fiction scenario(s)" or being on "the crazy train to absurd beliefs". I think it should come from engaging with those arguments on the object level. Sam Harris / Rob Reid's podcast and Robert Miles' YouTube channel are great ways in if you're interested.

One thing that might be going on is that native English speakers are afraid speaking slowly comes across as patronizing or racist.

Native speakers might be most used to slowing down and simplifying their English vocabulary when speaking to young children. So they're afraid of accidentally causing offense by treating you like a child.  

There's a racist (or more accurately, xenophobic) stereotype that 'foreigners are stupid' which people want to avoid invoking at all costs. If you assume someone's English proficiency is low because they have a foreign accent and you're wrong, you look like a racist.

So perhaps the default is that people will speak quickly until you give them permission to slow down, and then they're happy to.

I certainly wouldn't mind if someone just communicated directly and cleared up the whole mess by saying "hey, just to let you know I've only been learning English a few years. I can keep up with all the maths and concepts no problem but would you mind speaking a bit more slowly and clearly because English is my second language?" followed by a friendly smile. And then you could even coach me a bit as we talk on how slow to speak and what complexity of words to use. I can't imagine anyone at an EA conference responding badly to that if you keep the tone friendly and collaborative.

EDIT: Of course the native speaker in this situation could also take the lead and say "hey, just to check, I don't want to assume anything about your level of English. Is this about the right level of English speed and complexity? I know some people prefer I speak more slowly..."

Bottom line: for the highest chance of success, both parties should take responsibility for communicating openly and directly about what complexity of English to use; that way at least one of them is likely to raise and solve the problem.

I was pretty shaken up by Yudkowsky's Death With Dignity post and spent a few weeks in a daze. Eventually my emotions settled down and I thought about helpful ways to frame and orientate myself to it.

  1. Don't just flush the arguments and evidence away because they're scary
  2. I was expecting to die of natural causes in ~50 years' time anyway. If it's now ~20 years, emotionally that's still in the bucket of 'a really long time in the future'
  3. Lots of intelligent people whose thinking I trust put p(doom) below 90%. If somebody offered me the bargain: 90% chance of death in 20 years' time but you get a 10% shot at living in an AGI-created utopia forever, I'd take that deal.

I made some medium-sized changes to my savings, career and health strategies.

And I'm feeling kind of okay about everything now. 

I realize that all of that is framed very selfishly. None of those things address the issue that humanity gets wiped out, but apparently the emotional bit of me that was freaking out only cared about itself.

If 100% of these suggestions were implemented, I would expect EA in 5 years' time to look significantly worse (less effective, helping fewer people/animals, and possibly having more FTX-type scandals).

If the best 10% were implemented I could imagine that being an improvement.

Perhaps another EA donor could sign something guaranteeing to reimburse FF grantees in the event of clawbacks? 

I imagine a lot of people are now in the awkward situation of having money in their bank that they want to spend on projects but are hesitant to because it might be clawed back at some unknown time in the next 2 years.

Counter-argument: With less EA funding now available, the bar on grant applications needs to shift higher, so not all FF grantees should be funded now. The money could be better spent elsewhere.

Counter-counter-argument: If you're insuring pre-vetted grants (so no additional work for grant evaluators), there's only a 10-30% chance (wild guess) that you ever pay out on the guarantees, possibly several years down the line, and you get second-order positive effects by encouraging future EA project founders to take risks... maybe those multiples shift these clawback guarantees back above the line? (Rough illustration below.)
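To make "above the line" concrete, here's a back-of-envelope sketch with made-up numbers: suppose the guarantee covers $1m of already-vetted grants and there's a ~20% chance that clawbacks ever materialise. The expected cost to the guarantor is then roughly 0.2 × $1m = $200k, paid (if at all) years down the line. If unblocking that $1m of work today is worth more than ~$200k of expected impact, before even counting the second-order effects on founders' willingness to take risks, the guarantee looks worthwhile.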

(Disclosure: I'm an FF grant recipient, so very biased.)

EDIT: I no longer endorse the idea above. One thing I hadn't understood was that in this kind of situation, preference and fraudulent transfer claims usually involve the two parties working out a settlement rather than litigating. Having a guarantee in place would change the dynamics of those negotiations.

My grant agreement and the bank transfer both mention "FTX Philanthropy, Inc."

From the linked document on clawbacks:

What about non-US recipients?

The debtor could bring an avoidance action in the US against a non-US recipient. The recipient would then have to decide whether to litigate in the US. It could file a motion that the bankruptcy court lacks personal jurisdiction over it. The debtor could then seek to seize the recipient's assets if it had any in the US, or haul it into court in the US if it has a hook for jurisdiction; or, if the recipient had no assets in the US and there was no jurisdiction, there would be a question of whether the foreign court would let the debtor pursue the default judgment in that court. This takes substantial legal resources, so would only be worth it for larger sums.

 

Are you able to give a sense of what you mean by larger sums? $50k? $100k? $1m?