Raven

44 karma · Joined

Posts: 1 · Comments: 5

Sorted by: New

The 'lessons learned' listed in this forum post seem obvious. I googled "tips for running for congress" and in 10 minutes read through several resources that gave most of these same lessons learned. I expect a 30 min call with a Democrat strategist, of which there are several in the EA movement, would have also given the same lessons learned, and probably would have given a more accurate prediction on the election outcome than the prediction markets cited in this post.

People who worked on the campaign can speak to this better than I can, but I would give them more credit for doing reasonable due diligence. I have a strong expectation that:

  • There were lots of Democratic strategists involved
  • There were lots of attempts at polling / predicting the race

I also think there can be a meaningful difference between knowing on paper that "having connections in the district is important", "spending money can help you win", and "having a voting record is helpful", and seeing how those factors actually play out in practice. That said, I hope (and expect) that there was more "know-how" generated by the race than just the lessons reflected in this post.

Interesting, thanks! Any thoughts on how we should think about the relative contributions and specialization level of these different authors? I.e., a world of maximally important intangibles might be one where each author was responsible for tweaking a separate, important piece of the training process.

My rough guess is that it's more like 2-5 subteams working on somewhat specialized things, with some teams being moderately more important and/or more specialized than others. 

Does that framing make sense? And if so, what do you think?

Answer by Raven

Paul Christiano thinks there's a 1/3 chance Tesla gets fully self-driving cars by 2024, and expects that, conditional on that, their market cap will probably have more than tripled to over $3T. That's pretty insane commercial value right there.
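To make the implied upside concrete, here's a rough back-of-envelope sketch. The ~$1T current market cap and the assumption that the cap stays roughly flat without full self-driving are my own simplifications for illustration, not part of the prediction:

```python
# Back-of-envelope sketch of what the forecast implies in expectation.
# Assumptions (mine, not the forecaster's): Tesla's market cap is ~$1T now,
# and the no-FSD scenario leaves the cap roughly where it is today.

p_fsd_by_2024 = 1 / 3              # stated probability of full self-driving by 2024
cap_now_trillions = 1.0            # assumed current market cap, in $T
cap_if_fsd = 3.0                   # conditional forecast: cap more than triples to >$3T
cap_if_no_fsd = cap_now_trillions  # assumption: roughly unchanged otherwise

expected_cap = p_fsd_by_2024 * cap_if_fsd + (1 - p_fsd_by_2024) * cap_if_no_fsd
print(f"Implied expected 2024 market cap: ~${expected_cap:.2f}T")
# -> roughly $1.67T, i.e. substantial expected commercial value even before 2024
```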

In scientific applications, one obvious thought is advances on AlphaFold that enable better drug design. I'm not a domain expert, but I think that might require significant improvements over AlphaFold v2: moving beyond crystal structure prediction to in-solution structure prediction and to protein-protein interaction modeling.

I've heard from two casual programmer friends that AI programming assistants like GitHub Copilot are impressively good. They make it easier to write various finicky pieces of code, and help fix bugs. It seems to me like this could be really impactful if it turns out to help professional programmers; there's a lot of value to add, and potentially this could be turned towards AI programming itself...

Thanks for sharing this, Zoe!

I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don't agree with all your points or the ways you frame them.

Things that would make me excited to read future work, and IMO would make that work stronger:

  • Providing more concrete suggestions for improvement. Criticism is valuable, but I'm aware of many of the weaknesses of our frameworks; what I'm really hungry for is further work on solving them. This probably requires focusing down to specific areas, rather than casting a wide net as you did for this summary paper. 
  • Engaging with the nuances of longtermist thinking on these subjects. For example, when you mention the importance of risk-factor assessment, I don't see much engagement with e.g. the risk factor / threat / vulnerability model, or with the paper on defense in depth against AI risk. Neither of these models are perfect, but I expect they both have useful things to offer.
    • I expect this links up with the above point. Starting from a viewpoint of what-can-I-build encourages finding the strong points of prior work, rather than the weak points you focused on in this piece.

With regard to harshness, I think part of the reason you get different responses is because you're writing in the genre of the academic paper. Since authors have to write in a particular formal style, it's ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it's not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.

For example:

Under the TUA, an existential risk is understood as one with the potential to cause human extinction directly or lead us to fail to reach our future potential, expected value, or technological maturity. This means that what is classified as a prioritised “risk” depends on a threat model that involves considerable speculation about the mechanisms which can result in the death of all humans, their respective likelihoods, and a speculative and morally loaded assessment of what might constitute our inability to reach our potential.

[...]

A risk perception that depends so strongly on speculation and yet-to-be-verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination. If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.

As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it's easy to read into it some amount of value judgment around longtermism and longtermists.