
I think that changing our minds and owning up to past mistakes are two of the key virtues of effective altruism. I’d love to acknowledge instances of this and try to collect such stories as examples to follow.

Here are some examples to start with: 

Do you know of others? Please feel free to share examples of yourself doing this.


I think the best examples would have the following properties: 

  • An important error or change of mind
  • A public change of conclusion or explanation of the error
  • In a context or conversation that’s relevant to effective altruism

Also relevant: 

P.S. This is partially prompted by the Criticisms Contest. I’d be excited to see people engage with criticisms or red teaming of their work — especially when this leads to real changes of position or conclusion. 


 

10 Answers

Given that Greg trained as an MD because he wanted to do good, this probably counts: https://80000hours.org/2012/08/how-many-lives-does-a-doctor-save/

(and the many medical doctors and students who read posts like this and then also changed their minds, including me :-) )

Good point, thanks! I'm really impressed; it seems like a very hard switch to make.

A personal example: I wrote "Should Global Poverty Donors Give Now or Later?" and later realized my approach was totally wrong.

Thanks a bunch for sharing this! I think this is really cool. 

https://www.jefftk.com/p/revisiting-why-global-poverty

Ajeya posted an update to her AI timelines report:

My personal timelines have also gotten considerably shorter over this period. I now expect something roughly like this:

  • ~15% probability by 2030 (a decrease of ~6 years from 2036).
  • ~35% probability by 2036 (a ~3x likelihood ratio vs 15%).
    • This implies that each year in the 6 year period from 2030 to 2036 has an average of over 3% probability of TAI occurring in that particular year (smaller earlier and larger later).
  • A median of ~2040 (a decrease of ~10 years from 2050).
    • This implies that each year in the 4 year period from 2036 to 2040 has an average of almost 4% probability of TAI.
  • ~60% probability by 2050 (a ~1.5x likelihood ratio vs 50%).
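
As a quick check on the arithmetic in that excerpt, here is a minimal Python sketch. The dates and cumulative probabilities come straight from the quote; treating the quoted "likelihood ratios" as odds ratios against her previous estimates (15% by 2036, 50% by 2050) is my assumption.

```python
# Check the per-year probabilities and likelihood ratios implied by the quote.
# All dates and cumulative probabilities are taken from the excerpt above.

def odds(p):
    return p / (1 - p)

milestones = {2030: 0.15, 2036: 0.35, 2040: 0.50, 2050: 0.60}

# Average per-year probability of TAI between consecutive milestones.
years = sorted(milestones)
for start, end in zip(years, years[1:]):
    per_year = (milestones[end] - milestones[start]) / (end - start)
    print(f"{start}-{end}: ~{per_year:.1%} per year on average")

# The quoted "likelihood ratios", read here as odds ratios vs. the
# previous estimates (an assumption on my part).
print(f"2036: ~{odds(0.35) / odds(0.15):.1f}x vs 15%")  # ~3x
print(f"2050: ~{odds(0.60) / odds(0.50):.1f}x vs 50%")  # ~1.5x
```

Under that reading, the sketch reproduces the numbers in the quote: just over 3% per year from 2030 to 2036, almost 4% per year from 2036 to 2040, a ~3x ratio at 2036, and a ~1.5x ratio at 2050.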

This doesn't have all of the properties you're most looking for, but one example is this video by YouTuber Justin Helps, which is about correcting an error in an earlier video and explaining why he might have made it. (I don't quite remember what the original error was – I think something to do with Hamilton's rule in population genetics.)

Thank you!

Students for High-Impact Charity shut down based on their results:

  • Within a year of instructor-led workshops, we presented 106 workshops, reaching 2,580 participants at 40 (mostly high school) institutions. We experienced strong student engagement and encouraging feedback from both teachers and students. However, we struggled in getting students to opt into advanced programming, which was our behavioral proxy for further engagement.

Joe Carlsmith's report on power-seeking AI says this:

I assign rough subjective credences to the premises in this argument, and I end up with an overall estimate of ~5% that an existential catastrophe of this kind will occur by 2070. (May 2022 update: since making this report public in April 2021, my estimate here has gone up, and is now at >10%.)

1 Comment

Last week, EA NYC had a lightning talks event themed around "Something I Changed My Mind About" and I'd encourage others to host similar events as a low-key and fun way to encourage this community norm.
