In response to Open Thread #36
Comment author: DiverWard 15 March 2017 10:10:36PM 3 points

I am new to EA, but it seems that a true effective altruist would not be interested in retiring. When just $1,000 can avert decades of disability-adjusted life years (years of suffering), I do not think it is fair to sit back and relax (even in your 70s) when you could still be earning to give.
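Taking the comment's figures at face value, the implied cost-effectiveness arithmetic looks like this (an illustrative sketch only, not a vetted estimate):

```python
# Illustrative arithmetic using the comment's own figures:
# $1,000 averting "decades" of DALYs — assume two decades.
donation = 1_000      # dollars donated
dalys_averted = 20    # assumed: two decades of disability-adjusted life years
cost_per_daly = donation / dalys_averted
print(cost_per_daly)  # 50.0 dollars per DALY averted
```

The assumed 20-DALY figure is hypothetical; the point is only that the comment implies a cost per DALY on the order of tens of dollars.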

In response to comment by DiverWard on Open Thread #36
Comment author: Mac- 18 March 2017 01:08:42AM * 1 point

I don't plan to retire, but I've been thinking recently about a related topic: what to do in very advanced age, when my health and abilities have deteriorated to the point that I can no longer cover my cost of living.

My current plan is to donate and gift my remaining assets and take a one-way trip on Mac's Morphine Express if I find that I've outlived my usefulness. But I'm not sure, and it's easier said than done.

Comment author: Robert_Wiblin 02 March 2017 07:03:59PM 14 points

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there are now fewer independent donors forming their own views to correct it. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: Mac- 02 March 2017 09:27:19PM * 7 points

We need some diversity...in funding decisions

I nominated Brian Tomasik for fund manager. If hired, I think that would help (assuming he wants to do it).

In response to EA Funds Beta Launch
Comment author: Mac- 01 March 2017 12:00:58AM * 3 points

we plan to run the project for the next 3 months and then reassess...the main way we will assess...is total recurring donations to the EA funds and community feedback.

I don't think extrapolating from the next three months of donations will be very useful, given that it won't include peak giving season and will include US tax season.

Not knowing the opportunity costs you face, I recommend a 12-month "runway".

Comment author: Mac- 09 February 2017 12:31:50PM 9 points

In the first instance, we’re just going to have four funds...If the initial experiment goes well...running the Donor Lottery fund...we could potentially use this platform to administer moral trades between donors

FWIW, I am lukewarm on the funds idea, but excited about the Donor Lottery and most excited about the moral trade platform. I hope that if the funds idea fails, the Donor Lottery and moral trade platform are not scrapped as a result. I've never donated to CEA, but I would donate to support these two projects.

Comment author: Owen_Cotton-Barratt 31 December 2016 04:32:30PM * 0 points

This seems like a reasonable concern, and longer-term, building good institutions for donor lotteries seems valuable.

However, I suspect there may be more overheads (and possible legal complications) associated with trying to run it as part of an existing charity. In the immediate term, I wonder if there are enough people you do trust who might give character references that would work for this? (You implied trust in GiveWell, and I believe Paul and Carl are fairly well known to several GiveWell staff; on the other hand, you might think that the institutional reputation of GiveWell is more valuable than the individual reputations of people who work there, and so be more inclined to trust a project it backs not because you know more about it, but because it has more at stake.)

Comment author: Mac- 05 January 2017 01:43:33PM 0 points

However, I suspect there may be more overheads (and possible legal complications) associated with trying to run it as part of an existing charity

Given the current level of interest, the informal, small, and disconnected donor lotteries may be more efficient for the reasons you mentioned. My hunch is that donor lotteries could quickly grow to a non-trivial size, at which point I believe the economies of scale achieved by an institution would dominate.

you might think that the institutional reputation of GiveWell is more valuable than the individual reputations of people who work there

Yes.

Comment author: Mac- 31 December 2016 02:57:00PM 1 point

I think this is a very good idea. Unfortunately, I don't really know any of you, and I don't think it's worth the time to thoroughly research your reputations and characters, so I'm not going to contribute.

However, I would be interested in a registered charitable organization whose sole purpose is to run a donation lottery annually. In fact, I would donate to the operations of such a charity if the necessary safeguards and/or reputation were in place. Seems like an easy "bolt-on" project for GiveWell, no?

If anyone else would like to see a permanent donor lottery from GiveWell, let me know how much you're willing to contribute to start it (via private message if you prefer). I'll total the amounts in a few weeks and present them to GiveWell. Maybe it will pique their interest.
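For readers unfamiliar with the mechanism under discussion: a donor lottery pools contributions and selects a single winner, with win probability proportional to amount contributed; the winner then directs the entire pot. A minimal Python sketch (donor names and amounts are hypothetical, not any charity's actual implementation):

```python
import random

def run_donor_lottery(contributions):
    """Pick a winner with probability proportional to contribution.

    `contributions` maps donor -> amount donated. Returns the winning
    donor and the total pot, which the winner directs to charities of
    their choice.
    """
    donors = list(contributions)
    amounts = [contributions[d] for d in donors]
    pot = sum(amounts)
    # random.choices draws one donor, weighted by amount contributed.
    winner = random.choices(donors, weights=amounts, k=1)[0]
    return winner, pot

# Example: three donors pool their donations into a $5,000 pot.
winner, pot = run_donor_lottery({"A": 500, "B": 1500, "C": 3000})
# B wins with probability 1500/5000 = 30%.
```

The appeal is that each donor's expected allocation is unchanged, but the winner can justify spending far more research effort on directing the larger pot.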

Comment author: Mac- 30 November 2016 04:19:04PM * 5 points

The amount of money employees at EA organisations can give is fairly small

Agreed. Is there any evidence that employee donations are a significant problem, or that they will become one in the near future? If not, and given there is no obvious solution, I suggest focusing on higher priorities (e.g. VIP outreach).

Thanks to Max Dalton, Sam Deere, Will MacAskill, Michael Page, Stefan Schubert, Carl Shulman, Pablo Stafforini, Rob Wiblin, and Julia Wise for comments and contributions to the conversation.

I think too many (brain power × hours) have been expended here.

Sorry to be a downer, just trying to help optimize.

Comment author: kbog 11 November 2016 02:33:28PM * 0 points

Well, the military itself spends very little on AI and none on cutting-edge AI. What really matters is DARPA, IARPA, and whatever else might be going on in hidden parts of the intel community and DoD.

The bulk of AI development is the tech industry and academia. I think these areas are likely to be hurt by the administration.

Comment author: Mac- 11 November 2016 03:06:29PM * 0 points

You're right, I meant to say "Increased US defense spending", not "Increased US military spending". I inaccurately use "military" as a synonym for "defense" sometimes.

From Trump's website: "Emphasize cyber warfare...and create a state-of-the-art cyber defense and offense."

I think these areas are likely to be hurt by the administration

Maybe, but I'm not so sure the effect will be large enough.

Comment author: kbog 11 November 2016 05:18:21AM 2 points

Can I ask your rationale for P(AI created in next 20 years): Up?

Also note that the longer we delay the arrival of general AI, the more hardware and data will be available for it to immediately capitalize upon. So it's not entirely clear that delaying AI development is good.

Comment author: Mac- 11 November 2016 02:17:02PM * 0 points

Can I ask your rationale for P(AI created in next 20 years): Up?

Increased US military spending will result in more resources attempting to create AI. In a climate of lower international cooperation, other countries will hasten their efforts as well.

So it's not entirely clear that delaying AI development is good

Agreed. Many of my updates can be viewed as good, depending on your value system and other beliefs. However, forcing myself to guess, I think we have moved further away from the ideal, cooperative future - e.g. something inspired by CEV.

Comment author: Mac- 10 November 2016 11:18:48PM * 5 points

Here's a list of some updates I made, which were influenced by both the presidential and Congressional elections:

  • P(Anthropogenic global catastrophe): Up
  • P(Extinction before AI): Up
  • P(World government): Down

If AI is created:

  • P(AI created in next 20 years): Up
  • P(AI created by a military): Up
  • P(AI arms race): Up
  • P(Unfriendly AI): Up

In the near-term:

  • P(Poultry or fish included in Humane Slaughter Act): Down
  • P(Humane Slaughter Act effectively enforced): Down
  • P(Rising global productivity growth): Down
  • P(Higher US income inequality): Up

So...not great for most value systems.
