In response to comment by William_S on Open Thread #38
Comment author: rhys_lindmark 24 August 2017 03:40:41PM *  2 points

Nice link! I think there's worthwhile research to be done here to get a more textured ITN assessment.

On Impact—Here's a small example of x-risk (nuclear threat coming from inside the White House): https://www.vanityfair.com/news/2017/07/department-of-energy-risks-michael-lewis.

On Neglectedness—Thus far it seems highly neglected, at least at a system level. hifromtheotherside.com is one of the only projects I know of in the space (though the founder is not contributing much time to it).

On Tractability—I have no clue. Many of these "bottom up"/individual-level solution spaces seem difficult and organic (though we could pattern-match from the spread of the EA movement).

  1. There's a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I'm tempted to push an EA mindset of "outcome-izing/RCT-ing" the efforts in the space. So even if it doesn't score highly on Neglectedness, we could attempt to move the solutions toward more cost-effective/consequentialist approaches.
  2. This is highly related to the timewellspent.io movement that Tristan Harris (who was at EAGlobal) is pushing.
  3. I feel like we need to differentiate between the "political-level" and the "community-level".
  4. I'm tempted to think about this from the "communities connect with communities" perspective, i.e. the EA community is the "starting node/community", and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community-connection program through adjacent nodes (likely defined in the n-dimensional space seen here: http://blog.ncase.me/the-other-side/).
  5. Another version of this could be "scale the CFAR community".
  6. I think this could be related to Land Use Reform (https://80000hours.org/problem-profiles/land-use-reform/) and how we construct empathetic communities with a variety of people. (Again, see Nicky Case — http://ncase.me/polygons/)
Comment author: William_S 28 August 2017 05:00:44PM 0 points

Thanks for the Nicky Case links

In response to Open Thread #38
Comment author: William_S 23 August 2017 05:21:43PM 3 points

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. http://web.stanford.edu/~dbroock/published%20paper%20PDFs/broockman_kalla_transphobia_canvassing_experiment.pdf seems like a useful study in this area (and it seems possible that this approach could be used for issues on the other side of the political spectrum).

Comment author: turchin 11 August 2016 09:07:09PM 0 points

But if we stop emissions now, global warming will probably persist for around 1000 years, as I read somewhere, and could even jump because the cooling effect of soot will stop.

Global coordination problems also exist, but may not be as troublesome. In the first case punishment comes for non-cooperation, and in the second for actions, and actions always seem to be more punishable.

Comment author: William_S 12 August 2016 12:33:53AM *  0 points

I'm not saying that these mean we shouldn't do geoengineering, that they can't be solved, or that they will happen by default, just that they are additional risks (possibly unlikely but high-impact) that you ought to include in your assessment and that we ought to make sure we avoid.

Re coordination problems not being bad: it's true that they might work out, but there's significant tail risk. Just imagine that, say, the US unilaterally decides to do geoengineering, but it screws up food production and the economy in China. This probably increases the chances of nuclear war (even more so than if climate change does it indirectly, as there will be a more specific, attributable event). It's worth thinking about how to prevent this scenario.

Comment author: William_S 11 August 2016 08:51:33PM *  1 point

Extra risks from geoengineering:

Additional climate problems (i.e. it doesn't just uniformly cool the planet; I recall seeing a simulation somewhere in which climate change + geoengineering did not add up to no change, but instead significantly changed rainfall patterns).

Global coordination problems (who decides how much geoengineering to do, compensation for downsides, etc.). This could cause a significant increase in international tensions, plausibly war.

Climate Wars by Gwynne Dyer has some specific negative scenarios (for climate change + geoengineering) https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145

Comment author: William_S 16 January 2016 06:59:52PM 1 point

It might be useful to suggest Technology for Good as, for example, a place where companies with that focus could send job postings and have them seen by people who are interested in working on such projects.

Comment author: William_S 16 January 2016 06:58:12PM 1 point

This is probably not answerable until you've made some significant progress on your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology-for-good projects lines up with the skills required for those problems (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).

Comment author: DavidMoss 25 August 2015 07:33:32PM 7 points

I agree with all this Peter.

One problem I think is especially worth highlighting is not exactly the added "risk" of added meta-layers, but overdetermination: i.e., with a lot of people, orgs, and even background news articles and culture floating around, all persuading people to get involved in EA, it's very hard to know how much any of them is contributing.

Comment author: William_S 25 August 2015 09:06:16PM *  6 points

Another way of thinking about this is that in an overdetermined environment there seems to be a point at which the impact of EA movement building becomes "causing a person to join EA sooner" rather than "adding another person to EA" (the latter being the current basis for evaluating EA movement-building impact), and the former would be much less valuable.
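
To make the size of that gap concrete, here is a rough sketch with made-up numbers (the annual-value, career-length, and speed-up figures below are illustrative assumptions, not anything from the thread):

```python
# Illustrative sketch (all numbers are hypothetical assumptions):
# compare the counterfactual value of outreach under two models --
# "adding another person to EA" vs. "causing a person to join sooner".

ANNUAL_VALUE = 1.0   # value of one engaged person-year (arbitrary units)
CAREER_YEARS = 30    # assumed years a genuinely new member stays engaged
SPEEDUP_YEARS = 1    # assumed years by which outreach moves up a join date

# Model 1: outreach adds a person who otherwise never would have joined.
value_added_person = ANNUAL_VALUE * CAREER_YEARS

# Model 2: the person would have joined anyway; outreach only shifts
# their join date earlier, so only the extra early years are counterfactual.
value_earlier_join = ANNUAL_VALUE * SPEEDUP_YEARS

print(value_added_person)                        # 30.0
print(value_earlier_join)                        # 1.0
print(value_earlier_join / value_added_person)   # ~0.03, i.e. ~30x less valuable
```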

Comment author: William_S 25 August 2015 07:25:05PM 5 points

What sort of feedback signals would we get if EA was currently falling into a meta-trap? What is the current state of those signals?

Comment author: William_S 24 August 2015 08:12:16PM 1 point

In response to this article, I followed the advice in 1) and thought about where I'd donate in the animal suffering cause area, ending up donating $20 to New Harvest.

Comment author: William_S 20 August 2015 01:43:31AM 2 points

Idea: allow people to sign up to a list. Then, every (week/2 weeks/month) randomly pair up all people on the list and suggest they have a short Skype conversation with the person they are paired with.
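
A minimal sketch of what that periodic pairing step could look like (the function name, email-list input, and odd-count handling are assumptions for illustration):

```python
import random

def pair_participants(participants):
    """Shuffle the sign-up list and pair adjacent entries.

    If the list has an odd length, one person is left unpaired this
    round (they could be carried over or added to a group of three).
    """
    people = list(participants)
    random.shuffle(people)
    pairs = [(people[i], people[i + 1]) for i in range(0, len(people) - 1, 2)]
    leftover = people[-1] if len(people) % 2 else None
    return pairs, leftover

# Example: run once per week / 2 weeks / month over the current sign-ups.
signups = ["alice@example.com", "bob@example.com",
           "carol@example.com", "dana@example.com"]
pairs, leftover = pair_participants(signups)
for a, b in pairs:
    print(f"Suggest a short Skype call between {a} and {b}")
```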
