Comment author: casebash 11 January 2018 08:41:36AM 1 point [-]

I'd also add: get a group of people together. The easiest way is to create a Facebook group and promote it. Getting a new cause into EA is a huge amount of work, so you don't want to try to do it single-handed.

Comment author: Milan_Griffes 11 January 2018 12:51:06AM 0 points [-]

How does it imply that?

I don't have a fully articulated view here, but I think the problem lies in how the agent assesses how well its approximations are doing (i.e. the procedure it uses to judge whether an update is modeling the world more or less accurately).

Comment author: Milan_Griffes 11 January 2018 12:48:03AM *  0 points [-]

If we talk about estimates in objective and/or frequentist terms, it's equally difficult to observe the long term unfolding of the scenario.

Agreed. I think the difficulty applies to both types of estimates (sorry for being imprecise above).

Comment author: RyanCarey 10 January 2018 08:57:39PM 0 points [-]

I revisited this question earlier today. Here's my analysis with rough made-up numbers.

I think each extra time you donate blood, it saves less than 0.02 expected lives.

Suppose half the benefits come from the red blood cells.

Each blood donation gives about half a unit of red blood cells (a unit is ~300 ml).

Each red blood cell transfusion uses 2-3 units on average, and saves a life <5% of the time.

So on average every ~5 donations would save 0.1 lives (supposing the red blood cells are half of the impact).

But each marginal unit of blood is worth much less than the average because blood is kept in reserve for when it's most needed.

So it should be less than 0.02 lives saved per donation, and possibly much less. If saving a life via AMF costs a few thousand dollars, and most EAs should value their time at tens of dollars an hour or more, then pretty much all EAs should not donate their blood, at least as far as altruistic reasons go.
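The arithmetic above can be sketched in a few lines (all figures are the comment's own rough made-up numbers, not vetted estimates):

```python
UNITS_PER_DONATION = 0.5      # red blood cells per whole-blood donation
UNITS_PER_TRANSFUSION = 2.5   # midpoint of the 2-3 units cited
P_LIFE_SAVED = 0.05           # a transfusion saves a life <5% of the time
RBC_SHARE_OF_BENEFIT = 0.5    # suppose half the benefit comes from RBCs

# Expected lives saved per donation, before the marginal-value discount
# (marginal blood is worth much less than average, so this is an upper bound):
lives_via_rbc = (UNITS_PER_DONATION / UNITS_PER_TRANSFUSION) * P_LIFE_SAVED
lives_total = lives_via_rbc / RBC_SHARE_OF_BENEFIT

print(f"{lives_total:.3f} expected lives per donation")  # ~0.02
```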

I could be way wrong here, especially if the components other than red blood cells are providing a large fraction of the value.

Comment author: Richenda  (EA Profile) 10 January 2018 08:37:36PM *  3 points [-]

"I’m not suggesting that quantitive facts should be ignored during the hypothesis generation stage, just that we need to understand the hypothesis space before we can choose appropriate metrics, otherwise we may artificially limit the set of theories that we consider."

I very much agree with this view methodologically. This is why we used qualitative research methods in addition to quantitative ones for the LEAN impact assessment. There is a real risk of narrowing your perspective and obscuring important factors from view if you commit to specific metrics prematurely. Qualitative research design aims to keep the research process grounded and inductive: always responsive to unanticipated factors, regularly revisiting the fundamental problem framing, and steering sharply clear of methodological individualism (https://en.wikipedia.org/wiki/Methodological_individualism), which is the approach you described.

In the case of the impact assessment (where LEAN was trying to judge how effective our group support programme is, and how much impact groups have), we could look at metrics like group size, the number of individuals converted to EA, lifestyle changes, donations, pledges, events held and so forth. However, qualitative interviews were used to piece together the more complicated pathways that connect different nodes. The EA network is relatively small, which means that detailed examples can be very informative. I would like to see mixed methods of this kind used more.

If people want to avoid methodological individualism while still using quantitative techniques, social network analysis and multiple correspondence analysis are two quantitative techniques that many sociologists have used in order to tackle similar issues when working with much larger datasets. Social network analysis allows you to map out 'pipelines' of the kind you described in order to identify which nodes in the community are the most prominent and influential in terms of providing critical connections.

We don't, however, even need to do any more empirical analysis of EA to know that what you say is true... that many of the most important, high impact and high yield developments and achievements come down to an interaction between different community and information sources all coming together in a fortuitous way for a given trajectory. We can be sure of this not only by reflecting on examples in EA but also because this is simply a sociological human fact (often analysed and illustrated in the sprawling 'social capital' research field). The question then becomes, as you suggest, how do we cultivate the right environment for these vital spontaneous connections and interactions to take place?

My opinion on this is that we already do very well in this regard. Not through any virtue per se, other than the fact that the smaller a community is, the faster and more readily connections will arise (too small, of course, and you run out of useful nodes). However, we definitely can do better, and the most urgent area for practical intervention is restructuring this forum to better serve the EA online community. This has come out very clearly both in the 2017 Local Group Survey and in our interviews with group organisers. I think it is also quite self-evident. On offer for budding EAs are either dead backwater Facebook groups or monstrous central groups with hundreds of members where only the most confident EAs feel comfortable posting. The forum is similar. Although the option of anonymity probably empowers some people to speak up, there is a much larger collective of lurkers who feel too intimidated to contribute. A system of subforums providing sheltered zones targeted at different kinds of EA would encourage a good deal more people to come out of the woodwork and allow them to connect to one another. Individuals could then progress from a newbie-friendly subforum to more 'advanced' or in-depth content and conversations. I'm very happy that CEA will be taking on a restructure of the forum in the coming months.

Another area that can be optimised is the streamlining and organisation of content into a more user-friendly and accessible format. LEAN will be working on this in the near future: making existing content more navigable, continuing to make bespoke introductions between aligned individuals and organisations, helping EAs and EA groups find one another more easily (for example through our map of EAs), and maintaining up-to-date contact information and ensuring it is easily found.

Comment author: Austen_Forrester 10 January 2018 07:11:36PM 0 points [-]

I didn't mean to imply that it was hopeless to increase charitable giving in China, rather the opposite, that it's so bad it can only go up! Besides that, I agree with all your points.

The Chinese government already provides foreign aid in Africa to further its interests in the region. I was thinking of how we could possibly get it to expand that aid. The government seems almost impossible to influence, but perhaps EAs could influence African governments to solicit more foreign aid from China? It could have a negative consequence, however, in that receiving more aid from China may make African countries more susceptible to accepting bad trade deals, etc.

I don't know how to engage with China, but I do strongly feel that it holds huge potential for both altruism and GCRs, which shouldn't be ignored. I like CEA's approach of seeking out China generalist experts. There are a number of existing Western-China think tanks that could be useful to the movement, but I think that a "China czar" for EA is a necessity.

Comment author: Ben_West  (EA Profile) 10 January 2018 03:10:54PM 0 points [-]

Thanks. I was hoping that there would be aggregate results so I don't have to repeat the analysis. It looks like maybe that information exists elsewhere in that folder though? https://github.com/peterhurford/ea-data/tree/master/data/2017

Comment author: DonyChristie 10 January 2018 12:18:59PM 7 points [-]

Effective altruism has had three main direct broad causes (global poverty, animal rights, and far future), for quite some time.

The whole concept of EA having specific recognizable compartmentalized cause areas and charities associated with it is bankrupt and should be zapped, because it invites stagnation as founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of different finite tribal Houses to join and build alliances between or cannibalize, crowding out new classes of intervention and eclipsing the prerogative to optimize everything as a whole without all these distinctions. "Oh, I'm an (animal, poverty, AI) person! X-risk aversion!"

"Effective altruism" in itself should be a scalable cause-neutral methodology de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sacrosanct. The task is harder when people and organizations ostensibly about advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. I'm not saying those can't be net goods, but the effects on homogenization, centralization, and bias all restrict the purview of Effective Altruism.

I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.

Everyone here knows there are new causes and wants to accept them, but they don't know that everyone knows there are new causes, and so on: a common-knowledge problem. They're waiting for chosen ones to update the leaderboard.

If the tribally-approved list were opened it would quickly spiral out of working memory bounds. This is a difficult problem to work with but not impossible. Let's make the list and put it somewhere prominent for salient access.

Anyway, here is an experimental Facebook group explicitly for initial cause proposal and analysis. Join if you're interested in doing these!

Comment author: mhpage 10 January 2018 12:04:57PM 14 points [-]

This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.

Comment author: Lila 10 January 2018 03:24:42AM 0 points [-]

Why?

Comment author: DavidMoss 10 January 2018 02:50:53AM 1 point [-]

Comment author: enkin 10 January 2018 12:27:02AM 0 points [-]

I would far prefer dying immediately to being raped again.

Comment author: JesseClifton 09 January 2018 11:47:33PM 0 points [-]

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

I’m having difficulty parsing the statement you’ve attributed to me, or mapping it to what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”
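For what it's worth, the "frequentist properties of credences" idea can be made concrete with a Brier score: the mean squared error between stated probabilities and realized 0/1 outcomes. A minimal sketch (the forecasts and outcomes below are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: four probabilistic predictions and what actually happened.
forecasts = [0.9, 0.7, 0.2, 0.5]
outcomes = [1, 1, 0, 0]
score = brier_score(forecasts, outcomes)  # (0.01 + 0.09 + 0.04 + 0.25) / 4
```

A low score over many predictions is exactly the kind of external check on "distribution P" that the comment is gesturing at.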

Comment author: kbog  (EA Profile) 09 January 2018 08:17:20PM *  0 points [-]

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

Well, in this case at least, it is apparent that the differences are caused by how well or poorly supported people's beliefs are. It doesn't say anything about variance in general.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

Comment author: Ben_West  (EA Profile) 09 January 2018 06:08:49PM 0 points [-]

Is it possible to get the data behind these graphs from somewhere? (i.e. I want the numerical counts instead of trying to eyeball it from the graph.)

Comment author: JesseClifton 08 January 2018 10:21:40PM *  0 points [-]

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

Comment author: kbog  (EA Profile) 08 January 2018 05:57:15PM *  0 points [-]

It means that your credence will change little (or a lot) depending on information which you don't have.

For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.

On the other hand, suppose I don't talk to that guy, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about them which is not available to the market. And I have found that there is no overall reason to expect them to beat the market or not - the info is good just as much as it is bad. So I again have a 50% credence. However, if I talk to that one guy who tells me why the company is great, I won't update to 55% credence, I'll update to 51% or not at all.

Both people here are being perfect Bayesians. Before talking to the one guy, they both have 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the mean expectation.
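The difference in resilience can be sketched with a toy Beta-Binomial model (the specific Beta priors and the single-signal update are my illustration, not anything from the comment): both agents start at a 50% credence, but the well-informed agent's credence barely moves on the insider's tip.

```python
from fractions import Fraction

def posterior_mean(a, b, successes=0, failures=0):
    """Mean of a Beta(a, b) prior after observing Bernoulli outcomes."""
    return Fraction(a + successes, a + b + successes + failures)

# Agent 1 knows nothing about Pepsi: a flat Beta(1, 1) prior.
# Agent 2 spent a week digging and found the evidence balanced:
# the same 50% point estimate, encoded as a concentrated Beta(50, 50).
uninformed_prior = posterior_mean(1, 1)    # 1/2
informed_prior = posterior_mean(50, 50)    # 1/2

# Both then hear one positive signal from the company insider.
uninformed_post = posterior_mean(1, 1, successes=1)    # 2/3
informed_post = posterior_mean(50, 50, successes=1)    # 51/101, barely above 1/2
```

Both updates are perfectly Bayesian; the priors just encode different amounts of evidence behind the same 50% number.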

Comment author: ThomasSittler 08 January 2018 12:25:07PM 0 points [-]

For the reasons I explained above, I think many economists believe this is true (in many situations). But they don't use the argument you attribute to them.

Comment author: DonyChristie 08 January 2018 05:27:06AM 2 points [-]

For more speculative things, we want to put part of the money towards a project that a friend we know through the Effective Altruism movement is starting. In general I think this is a good way for people to get funding for early stage projects, presenting their case to people who know them and have a good sense of how to evaluate their plans.

What is the project (at the finest granularity of detail you are comfortable disclosing)?

Comment author: DonyChristie 08 January 2018 05:22:33AM 4 points [-]
