Comment author: [deleted] 18 August 2017 08:50:51PM 2 points [-]

Could you give specific examples of how ideas from VCs or startups could contribute a novel insight to EA prioritisation? Your links weren't very helpful on their own.

Comment author: rhys_lindmark 23 August 2017 06:18:10PM 1 point [-]

Yep yep, happy to! A couple things come to mind:

  1. We could track the "stage" of a given problem/cause area, similar to the way startups are tracked by funding round (Seed, Series A, etc.). In other words, EA prioritization would be categorized with respect to stages/gates. I'm not sure if there's an agreed-upon "stage terminology" in the EA community yet. (I know GiveWell's Incubation Grants http://www.givewell.org/research/incubation-grants and EA Grants https://www.effectivealtruism.org/grants/ are examples of recent "early stage" investment.) Here are some example stages:

Stage 1) Medium dive into the problem area to determine ITN (importance, tractability, neglectedness).
Stage 2) Experiment with MVP solutions to the problem.
Stage 3) Move up the hierarchy of evidence for those solutions (RCTs, etc.).
Stage 4) For top solutions with robust cost-effectiveness data, begin to scale.

(You could create something like a "Lean Canvas for EA Impact" that could map the prioritized derisking of these stages.)
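The stage/gate funnel above could be sketched as a toy model. All names here are illustrative, not an actual EA tool or terminology:

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical stage names following the Stage 1-4 outline above.
class Stage(IntEnum):
    ITN_DIVE = 1        # medium dive to assess importance/tractability/neglectedness
    MVP_SOLUTIONS = 2   # experiment with minimum-viable solutions
    EVIDENCE = 3        # move up the hierarchy of evidence (RCTs, etc.)
    SCALE = 4           # scale solutions with robust cost-effectiveness data

@dataclass
class CauseArea:
    name: str
    stage: Stage = Stage.ITN_DIVE

    def advance(self) -> "CauseArea":
        """Promote the cause area through the next gate, if one remains."""
        if self.stage < Stage.SCALE:
            self.stage = Stage(self.stage + 1)
        return self

# A cause area that has passed its ITN dive and MVP experiments:
area = CauseArea("clean-water interventions")
area.advance().advance()
print(area.stage.name)  # prints "EVIDENCE"
```

The point of the sketch is just that "prioritized derisking" becomes explicit: each gate names the evidence a project needs before more funding flows to it.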

  2. From the "future macro trends" perspective, I feel like there could be more overlap between EA and VC models that are designed to predict the future. I'm imagining this like the current co-evolving work environment with "profit-focused AI" (DeepMind, etc.) and "EA-focused AI" (OpenAI, etc.). In this area, both groups are helping each other pursue their goals. We could imagine a similar system for any given macro trend, i.e. that macro trend is viewed from both a profit perspective and an impact/EA perspective.

In other words, this is a way for the EA community to say "The VC world has [x technological trend] high on their prioritization list. How should we take part from an EA perspective?" (And vice versa.)

(fwiw, I see two main ways the EA community interacts in this space—pursuing projects that either a) leverage or b) counteract the negative externalities of new technologies. Using VR for animal empathy is an example of leverage. AI alignment is an example of counteracting a negative externality.)

Do those examples help give a bit of specificity for how the EA + VC communities could co-evolve in "future uncertainty prediction"?

Comment author: Halstead 22 August 2017 11:41:08AM *  0 points [-]

Good shout. Does anyone have any thoughts on this that aren't well-known, or that disagree with Tetlock?

Comment author: rhys_lindmark 23 August 2017 05:37:10PM 0 points [-]

This isn't a unique thought, but I just want to make sure the EA community knows about Gnosis and Augur, decentralized prediction markets built on Ethereum.

https://gnosis.pm/

https://augur.net/
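For readers unfamiliar with the mechanism, here is a minimal sketch of how a binary prediction market settles. This is a toy model of the general idea behind platforms like Gnosis and Augur, not their actual contract logic:

```python
def settle(shares_yes: float, shares_no: float, outcome: bool,
           payout_per_share: float = 1.0) -> float:
    """Return the payout for a holder of YES and NO shares once the
    market resolves. Winning shares each redeem for payout_per_share;
    losing shares redeem for nothing."""
    winning = shares_yes if outcome else shares_no
    return winning * payout_per_share

# A trader holding 10 YES shares in a market that resolves YES:
print(settle(10, 0, outcome=True))  # prints 10.0
```

Because share prices before resolution track the crowd's probability estimate, these markets double as a forecasting tool in the Tetlock sense.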

Comment author: rhys_lindmark 18 August 2017 04:38:31PM 2 points [-]

I definitely agree that information on these topics is ripe for aggregation/curation.

My instinct is to look to the VC/startup community for some insight here, specifically around uncertainty (they're in the business of "predicting/quantifying/derisking uncertain futures/projects"). Two quick examples:

I would expect an "EA-focused uncertainty model" to include gates that map a specific project through time given models of macro future trends.

Comment author: rhys_lindmark 18 August 2017 04:01:06PM 1 point [-]

Thanks for aggregating this information, Richenda! One quick bucket of thoughts around EA groups + universities:

  1. How are LEAN/CEA/EAF thinking about university chapters? Have they been an effective way of building a local community? Are there any university-focused plans going forwards?
  2. Are there other movements trying a university-focused strategy? Could we partner/learn from them? I'm thinking about something like Blockchain Education Network (see https://blockchainedu.org/ and https://medium.com/@rishipr/fa2543cdcbd8).

Thanks Richenda!

Comment author: remmelt  (EA Profile) 20 April 2017 11:51:05PM 20 points [-]

While this way of gauging feedback is far from perfect, our impression is that community feedback has been largely positive. Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' to me. To be honest, it made me wonder whether the analysis was self-critical enough (I admit to having only scanned it), as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also don't think a 'largely positive' reception is a good indicator. If a person like Eliezer were the sole voice of disagreement, that should still give pause for thought.

Even though the article is an update, I'm somewhat concerned that it says little about possible long-term risks. One that seems especially important is the effect of centralising fund allocation (mostly to managers connected to OP) on the diversity of views and decentralised correction mechanisms within our community. Please let me know where you think I might have made mistakes or missed important aspects.

I especially want to refer to Rob Wiblin's earlier comment: http://effective-altruism.com/ea/17v/ea_funds_beta_launch/aco

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there's now fewer independent donors forming their own views to correct them. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: rhys_lindmark 04 July 2017 07:45:07PM *  0 points [-]

One note on this: blockchain-based DAOs (decentralized autonomous organizations) are a good way to decentralize a giving body (like EAFunds). Rhodri Davies has been doing good work in this space (on AI-led DAOs for effective altruism). See https://givingthought.libsyn.com/algorithms-and-effective-altruism or my recent overview of EA + Blockchain: https://medium.com/@RhysLindmark/creating-a-humanist-blockchain-future-2-effective-altruism-blockchain-833a260724ee
