Comment author: Milan_Griffes 20 December 2017 09:35:35PM 0 points [-]

We're on the cusp of being able to maybe buy a permanent venue

Are you mostly searching for venues in the Bay Area (+ venues within day-driving distance of the Bay Area)?

Comment author: TimothyTelleenLawton 20 December 2017 06:34:06PM *  10 points [-]

I’d like to give a quick update on my plans for the 2016 Donation Lottery winnings.

Of the $45,650, I’ve decided to give $21,000 to the Czech Association for Effective Altruism so they can hire one full-time staff member (or equivalent) for one year to manage the organization. I have not yet transferred that money, nor decided how to allocate the other $24,650.

I decided to support the Czech Association for Effective Altruism for four reasons: I am impressed with their ability to execute difficult projects; I believe their projects have the potential to make a large positive impact (including via the impact on the chapter members executing them); I believe they will be able to execute substantially more and higher-quality projects with employed leadership than without; and I believe funding is the limiting factor for the chapter hiring leadership staff.

I became aware of the Czech Association for Effective Altruism (The Chapter) when they hosted two CFAR workshops near Prague in October 2017; CFAR hired me to be one of a handful of instructors for those workshops. Some observations and beliefs from spending time with a few of the leaders of The Chapter:

  • The Chapter successfully caused there to be CFAR workshops in Europe in 2017 that wouldn’t have happened otherwise. The demand for the workshops was high enough to justify two workshops in rapid succession. Hosting these workshops was one of a few major priorities for The Chapter in 2017.
  • The Chapter handled virtually all of the operations for the two workshops (~10 staff and ~30 participants at each workshop), including finding a venue with relatively narrow specifications and providing lodging, food, local transportation, supplies, and instructor support. While there were some hiccups in the operations, things generally went very well, and better than I (and most CFAR staff with whom I discussed it) had expected from a first-time crew. At least one CFAR instructor believed that the operations at the Prague workshops were even better than at the typical CFAR workshop in the Bay Area, where operations are generally managed by a CFAR employee with support from volunteers.
  • The leaders of The Chapter seem to be observant, thoughtful, self-critical, and dedicated. These attributes make me much more confident that they will be successful, particularly for their ability to observe problems and make adjustments accordingly over time.
  • The Chapter seems less well connected to the global EA movement and possible funders than other equivalently talented EAs with whom I’m familiar. I also expect that the global movement would benefit from The Chapter being more influential within it.

Some expectations related to the donation:

  • Much of the success of The Chapter in 2017 seems to be attributable to having a Director who was spending approximately full time on the chapter (despite very little compensation). The past Director recently left to take a paid full-time job, and I expect The Chapter’s effectiveness to drop substantially if they are not able to hire replacement leadership.
  • The Chapter believes that the staff they hire with this donation will be able to lead fundraising efforts to support their own salary and the rest of The Chapter’s budget for future years.
  • I intend to make this donation only if I can do so legally. The donation process may involve donating the money to another non-profit (with 501(c)(3) tax-advantaged status) that would in turn consider supporting The Chapter. If not all of the money is passed on to The Chapter, that will reduce the efficiency of the donation. I hope for The Chapter to receive about $20k, since that is what they estimate they need to hire leadership for one year (and they believe other donations can cover their other budgetary needs). I expect I will need to allocate about $21k in order for The Chapter to likely receive $20k; a rough sketch of this arithmetic is below.
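
As a minimal sketch of the overhead arithmetic in the last bullet: the code below assumes a flat pass-through fee, with the ~5% rate inferred from the $21k/$20k figures rather than quoted by any intermediary.

```python
# Sketch of the donation pass-through arithmetic.
# ASSUMPTION: the intermediary 501(c)(3) keeps a flat fraction as overhead;
# the ~5% rate is inferred from the $21k -> $20k figures, not a quoted fee.

def allocation_needed(target_received: float, overhead_rate: float = 0.05) -> float:
    """Amount to allocate so the recipient nets `target_received` after fees."""
    return target_received / (1 - overhead_rate)

target = 20_000  # what The Chapter estimates one year of leadership costs
print(f"Allocate ~${allocation_needed(target):,.0f} for The Chapter to receive ~${target:,}")
# -> Allocate ~$21,053 for The Chapter to receive ~$20,000
```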

I’m planning to post audio of my last interview with The Chapter, as well as budgetary and strategic information that The Chapter has shared with me.

Edits: inserted the organization's official name, "Czech Association for Effective Altruism", and corrected bullet formatting.

Comment author: Milan_Griffes 20 December 2017 09:33:19PM 0 points [-]

Thanks for the update!

The past Director recently left to take a paid full-time job, and I expect The Chapter’s effectiveness to drop substantially if they are not able to hire replacement leadership.

Do you know if the chapter is planning to hire back the outgoing director, or hire a different replacement director?

Comment author: Milan_Griffes 29 November 2017 04:33:30AM 3 points [-]

The study of North Korea may produce insight into how dystopian societal attractor points can be averted or what preventive measures (beyond what is present in today’s North Korea) might help people on the inside destabilize them.

This is a great point.

In response to What consequences?
Comment author: kbog  (EA Profile) 28 November 2017 06:10:14AM 1 point [-]

It's worth noting that attending to long-run consequences doesn't necessarily mean just looking at x-risks. A fully fleshed-out long-run evaluation looks at many factors of civilization quality and safety, and I think it is good enough to dominate other considerations. It's certainly better than allowing mere x-risk concerns to dominate.

But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best.

I don't think this is true. Killing a random baby on the off chance that it might become a dictator is a bad idea. You can do the math on that if you want, or just trust me that the expected consequences of it are hurtful to society.

In response to comment by kbog  (EA Profile) on What consequences?
Comment author: Milan_Griffes 29 November 2017 04:24:05AM 1 point [-]

Intuitively, I completely agree that killing a random baby is socially harmful.

The example is interesting because it's tricky to "do the math" on. (Hard to arrive at a believable long-run cost of a totalitarian dictatorship; hard to arrive at a believable long-run cost of instituting a social norm of infanticide.)
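
To make the difficulty concrete, here is a toy expected-value sketch; every number in it is a made-up placeholder, and the point is only that plausible-looking inputs put the answer on both sides of zero:

```python
# Toy expected-value sketch; ALL numbers are made-up placeholders.
# The point is not any particular answer, but that plausible-looking
# inputs put the net expected value on both sides of zero.

def net_value(p_dictator: float, harm_of_dictatorship: float,
              harm_of_infanticide_norm: float) -> float:
    """Net expected long-run value of the act, in arbitrary 'harm units'."""
    return p_dictator * harm_of_dictatorship - harm_of_infanticide_norm

for p in (1e-9, 1e-7, 1e-5):
    for harm in (1e9, 1e12):
        v = net_value(p, harm, harm_of_infanticide_norm=1e4)
        print(f"p={p:.0e}, dictatorship harm={harm:.0e}: net {v:+.2e}")
```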

In response to What consequences?
Comment author: JesseClifton 24 November 2017 09:28:01PM 2 points [-]

Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.

I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.

Comment author: Milan_Griffes 29 November 2017 04:19:43AM 0 points [-]

Good point; I was implicitly considering s-risks as a subset of x-risks.

In response to What consequences?
Comment author: MichaelPlant 28 November 2017 09:12:53PM 3 points [-]

A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.

I think this example undermines, rather than supports, your point. Of course it's possible the baby would have grown up to be Hitler. It's also possible the baby would have grown up to be a great scientist. Hence, from the perspective of the doctor, who is presumably working on expected value and has no reason to think one scenario is more likely than the other, these possibilities presumably just cancel out. Hence the doctor looks at the obvious consequences. This seems like a case of what Greaves calls simple cluelessness.

A couple of general comments. There is already an academic literature on cluelessness and it's known to some EAs. It would be helpful therefore if you make it clear what you're doing that's novel. I don't mean this in a disparaging way. I simply can't tell if you're disagreeing with Greaves et al. or not. If you are, that's potentially very interesting and I want to know what the disagreement exactly is so I can assess it and see if I want to take your side. If you're not presenting a new line of thought, but just summarising or restating what others have said (perhaps in an effort to bring this information to new audiences, or just for your own benefit), you should say that instead so that people can better decide how closely to read it.

Additionally, I think it's unhelpful to (re)invent new terminology without a good reason. I can't tell the clear difference between proximate, indirect, and long-run consequences. I would much have preferred it if you'd explained cluelessness using Greaves' setup and then progressed from there as appropriate.

Comment author: Milan_Griffes 29 November 2017 04:16:26AM *  2 points [-]

There is already an academic literature on cluelessness and it's known to some EAs. It would be helpful therefore if you make it clear what you're doing that's novel ...

Do you know of worthwhile work on this beyond Greaves 2016? (Please point me to it, if you do!)

Greaves 2016 is the most useful academic work I've come across on this question; I was convinced by its arguments against Lenman 2000.

I stated my goal at the top of the piece.

I would much have preferred it if you'd explained cluelessness using Greaves' setup and then progressed from there as appropriate.

I don't think Greaves presented an analogous terminology?

"Flow-through effects" & "knock-on effects" have been used previously, but they don't distinguish between temporally near & temporally distant effects. That distinction seems interesting, so I decided to not those terms.

Comment author: Milan_Griffes 29 November 2017 04:08:29AM *  1 point [-]

Thanks for the thoughtful comment :-)

This seems like a case of what Greaves calls simple cluelessness.

I'm fuzzy on Greaves' distinction between simple & complex cluelessness. Greaves uses the notion of "systematic tendency" to draw out complex cluelessness from simple, but "This talk of ‘having some reasons’ and ‘systematic tendencies’ is not as precise as one would like" (p. 9 of Greaves 2016).

Perhaps it comes down to symmetry. When we notice that for every imagined consequence, there is an equal & opposite consequence that feels about as likely, we can consider our cluelessness "simple." But when we can't do this, our cluelessness is complex.

This criterion is unsatisfyingly subjective though, because it relies on our assessing the equal-opposite consequence as "about as likely," and on whether we are able to imagine an equal-opposite consequence at all.
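
One way to make that symmetry criterion explicit (my own rough formalization, not Greaves' notation):

```latex
% Rough formalization of the symmetry criterion; my own paraphrase,
% not Greaves' setup or notation.
% Cluelessness about act $a$ is \emph{simple} if every imagined long-run
% consequence $c$, with credence $p(c \mid a)$ and value $v(c)$, can be
% paired with an offsetting consequence $\bar{c}$ such that
% $v(\bar{c}) = -v(c)$ and $p(\bar{c} \mid a) \approx p(c \mid a)$,
% in which case the unknown terms approximately cancel:
\[
  \sum_{c} p(c \mid a)\, v(c) \;\approx\; 0 .
\]
% Cluelessness is \emph{complex} when no such pairing can be found, so
% we cannot even argue that the unknown terms cancel.
```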

What consequences? (10 points)

This is the first in a series of posts exploring consequentialist cluelessness and its implications for effective altruism. This post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact. The second post will examine... Read More
Comment author: Elizabeth 21 November 2017 02:51:00AM 1 point [-]

Oops, thanks for the correction. Do you have those broken out separately?

Comment author: Milan_Griffes 21 November 2017 05:37:55PM *  1 point [-]

Yes, see rows 4-17 in our model:

https://docs.google.com/spreadsheets/d/1i6aRlYiITg_birU6rW7FuN9O18X6zLucbCsdPRAkR9E/edit?usp=sharing

Our best guess is that the ballot initiative costs ~$16 million all-in, whereas the yearly cost of the treatment is ~$2.6 billion.

We haven't yet figured out a believable way to separate out the portion of benefit attributable to the ballot initiative compared to the portion of benefit attributable to the treatment itself.
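
For what it's worth, here is a sketch of why that attribution question drives the bottom line; the $16M and $2.6B figures are from above, but the benefit normalization and the candidate attribution shares are placeholders, not modeled estimates:

```python
# Sketch of how the benefit-attribution split drives cost-effectiveness.
# The $16M and $2.6B figures come from the comment above; the attribution
# shares and the benefit normalization are PLACEHOLDERS, not estimates.

initiative_cost = 16e6          # ballot initiative, all-in
treatment_cost_yearly = 2.6e9   # yearly cost of the treatment
total_benefit = 1.0             # normalize total benefit to 1 unit

for initiative_share in (0.01, 0.10, 0.50):
    initiative_bpd = initiative_share * total_benefit / initiative_cost
    treatment_bpd = (1 - initiative_share) * total_benefit / treatment_cost_yearly
    print(f"initiative share {initiative_share:.0%}: "
          f"{initiative_bpd / treatment_bpd:,.1f}x the treatment's benefit per dollar")
```

Under these placeholder shares, the initiative looks leveraged mostly because its cost is two orders of magnitude smaller; the open question is what share of the benefit it can actually claim.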
