Comment author: DonyChristie 04 May 2018 04:36:18AM 1 point [-]

Someone just try and build something.

Comment author: DonyChristie 22 January 2018 05:19:13AM 1 point [-]

Global catastrophic risks: North Korea: Fund ‘Flash Drives for Freedom’, which smuggles flash drives with unbiased information into North Korea. Such an approach was implicitly endorsed in November by Thae Yong-ho, once number two at North Korea’s London embassy and now a defector. There is also academic analysis suggesting this isolation is one of the reasons for the lack of uprising in North Korea.

Any thoughts on the expected value of this in particular? The campaign says $1 funds roughly one flash drive.
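One way to make that expected-value question concrete is a toy back-of-envelope model. Every parameter below is a hypothetical placeholder I made up for illustration, not a real estimate from the campaign:

```python
# Back-of-envelope expected value for "Flash Drives for Freedom".
# All parameter values are hypothetical placeholders, not real estimates.

COST_PER_DRIVE = 1.00     # USD; the campaign's stated $1 ~ 1 flash drive
P_DELIVERED = 0.5         # assumed chance a drive reaches someone in North Korea
VIEWERS_PER_DRIVE = 10    # assumed number of people who share one drive
VALUE_PER_VIEWER = 0.01   # assumed "impact units" per person reached

def expected_value_per_dollar(cost, p_delivered, viewers, value_per_viewer):
    """Expected impact units bought by one dollar, given the inputs."""
    drives_per_dollar = 1.0 / cost
    return drives_per_dollar * p_delivered * viewers * value_per_viewer

ev = expected_value_per_dollar(
    COST_PER_DRIVE, P_DELIVERED, VIEWERS_PER_DRIVE, VALUE_PER_VIEWER
)
print(f"Expected impact units per dollar: {ev:.3f}")
```

The point is less the output number than the structure: anyone arguing for or against the intervention could plug in their own estimates for delivery probability, sharing, and value per exposure and see where the disagreement actually lies.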

Comment author: DonyChristie 20 January 2018 11:32:10PM *  2 points [-]

The political mobilization you are prematurely demanding to rectify this laundry list of concerns is first contingent on individuals like myself being persuaded of the veracity of your claims. This post makes a lot of claims, and the conjunction of all of them is exceedingly improbable. It would be easier to persuade me by first expounding on one concrete opportunity for intervention, such as this pipeline (or whichever specific intervention is best here): its cost-effectiveness in creating QALYs (or your preferred measure), and how the expected output of our contributions would compare to other potential effective interventions in a similar class of human-concernedness, such as ALLFED, AMF, or biosecurity, or even more dissimilar ones like AGI alignment or animal welfare. That would work better than expressing shock that we do not hold the same inside view on what is literally the most important thing to do with one's resources.

This recently made guide on introducing new interventions to aspiring effective altruists, if followed, will help achieve that. You can also post any calculations in this group and receive feedback. Effective Environmentalism might interest you as well. :)

Comment author: DonyChristie 10 January 2018 12:18:59PM 7 points [-]

Effective altruism has had three main direct broad causes (global poverty, animal rights, and far future) for quite some time.

The whole concept of EA having specific, recognizable, compartmentalized cause areas and charities associated with it is bankrupt and should be zapped, because it invites stagnation. Founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of finite tribal Houses to join, build alliances between, or cannibalize. That crowds out new classes of intervention and eclipses the prerogative to optimize everything as a whole, without all these distinctions. "Oh, I'm an (animal, poverty, AI) person! X-risk aversion!"

"Effective altruism" in itself should be a scalable, cause-neutral methodology, de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sacrosanct. The task is harder when people and organizations ostensibly devoted to advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. I'm not saying those can't be net goods, but the effects on homogenization, centralization, and bias all restrict the purview of Effective Altruism.

I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.

Everyone here knows there are new causes and wants to accept them, but they don't know that everyone knows there are new causes, and so on: a common-knowledge problem. They're waiting for chosen ones to update the leaderboard.

If the tribally-approved list were opened, it would quickly spiral beyond working-memory bounds. This is a difficult problem to work with, but not an impossible one. Let's make the list and put it somewhere prominent for salient access.

Anyway, here is an experimental Facebook group explicitly for initial cause proposal and analysis. Join if you're interested in doing these!

Comment author: DonyChristie 08 January 2018 05:27:06AM 2 points [-]

For more speculative things, we want to put part of the money towards a project that a friend we know through the Effective Altruism movement is starting. In general I think this is a good way for people to get funding for early stage projects, presenting their case to people who know them and have a good sense of how to evaluate their plans.

What is the project (at the finest granularity of detail you are comfortable disclosing)?

Comment author: Nekoinentr 04 January 2018 07:34:04AM 2 points [-]

For example, suppose you see an idea for an effective charity on Charity Science. You contact them and they provide you with advice and link you up with potential cofounders.

Have they done this for anyone?

Comment author: DonyChristie 08 January 2018 05:22:33AM 4 points [-]
Comment author: itaibn 14 December 2017 04:54:50PM 6 points [-]

I've spent some time thinking and investigating what the current state of affairs is, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities, which they endorse as possibilities. One of these was the SENS foundation. Matthew_Barnett suggested that this is evidence that they particularly care about long-term-future causes, but given the diversity of the other causes they endorsed, I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads are high up in the Reddit sorting algorithm and have many comments endorsing EA. This is already a good position and is difficult to improve: they either like what they see or they don't. It might be better if the top-level comments explicitly described and linked to a specific charity, since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations, which might have more to do with the distribution of existing comments than with PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment author: DonyChristie 14 December 2017 09:09:54PM 6 points [-]

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Oh dear! No, I didn't explicitly realize this beyond passing thoughts. In retrospect, I'm confused why this wasn't cached in my mind as being against reddiquette. I should eat my own dogfood regarding brigading. I've edited it so it's no longer soliciting. Let me know here or privately if there are any further fixes I should make to the post (e.g. whether I should just remove the links to the known EA comments).

Comment author: DonyChristie 27 November 2017 09:48:25AM 0 points [-]

Did you look into coherence therapy or other modalities that use memory reconsolidation? Memory reconsolidation is theoretically more potent than CBT.

Comment author: DonyChristie 11 November 2017 11:31:32PM 0 points [-]

Having now installed the userstyles: in order to unblind (and re-blind) myself, do I need to click the Stylish icon and press 'Deactivate' on the script? If so, that might be a trivial inconvenience.

Comment author: DonyChristie 31 October 2017 05:49:08PM 12 points [-]

To what extent have you (whoever's in charge of CHS) talked with the relevant AI Safety organizations and people?

To what extent have you researched the technical and strategic issues, respectively?

What is CHS's comparative advantage in political mobilization and advocacy?

What do you think the risks are to political mobilization and advocacy, and how do you plan on mitigating them?

If CHS turned out to be net harmful rather than net good, what process would discover that, and what would the result be?
