Comment author: Jacy_Reese 22 February 2018 07:02:31PM 2 points

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different from what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean that, without MCE advocacy being done, they wouldn't put wide-MC* values, or values that lead to wide-MC, into an aligned AI. But I think that's being conflated with "they actively oppose it," or with "they would answer 'no' if asked, 'Do you think your values are wrong when it comes to which moral beings deserve moral consideration?'"

I think they don't actively oppose it, that they would mostly answer "no" to that question, and that it's very uncertain whether they will put the wide-MC-leading values into an aligned AI. I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

This leads me to think that you only need (2) to be true in a very weak sense for MCE to matter. I think it's quite plausible that this is the case.

*Wide-MC meaning an extremely wide moral circle, e.g. one that includes insects and small/weird digital minds.

Comment author: William_S 26 February 2018 03:50:12AM 1 point

"I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins)."

Why do you think this is the case? Do you think an alternative reflection process (implemented by an AI, by a human society, or by a combination of both) could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like?

If we go through some kind of reflection process to determine our values, I would much rather have a reflection process that wasn't dependent on whether or not MCE occurred beforehand, and I think that not leading to a wide moral circle should be considered a serious bug in any definition of a reflection process. It seems to me that working on producing such a process would be a plausible alternative, or at least a parallel path, to directly performing MCE.

Comment author: William_S 06 November 2017 06:29:44PM 2 points

I've talked to Wyatt and David; afterwards I am more optimistic that they'll think about downside risks and be responsive to feedback on their plans. I wasn't convinced that the plan laid out here is a useful direction, but we didn't dig into it in enough depth for me to be certain.

Comment author: WyattTessari 31 October 2017 06:20:37PM 5 points

Hi Dony,

Great questions! My name is Wyatt Tessari and I am the founder.

1) We are doing that right now. Consultation is a top priority for us before we start our advocacy efforts. It's also part of the reason we're reaching out here.

2) Our main comparative advantage is that (to the best of our research) there is no one else in the political/advocacy sphere openly talking about the issue in Canada. If there are better organisations than us, where are they? We'd gladly join or collaborate with them.

3) There are plenty of risks - causing fear or misunderstanding, getting hijacked by personalities or adjacent causes, causing backlash or counterproductive behaviour - but the reality is that these risks exist anyway. The general public will eventually clue in to the stakes around ASI and AI safety, and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns.

4) This is a tough question. There would likely be a number of metrics - feedback from AI & governance experts, popular support (or lack thereof), and a healthy dose of ongoing critical thought. But if you (or anyone else reading this) have better ideas, we'd love to hear them.

In any case, thanks again for your questions and we'd love to hear more (that's how we're hoping to grow...).

Comment author: William_S 31 October 2017 08:50:01PM 8 points

Seems like the main argument here is: "The general public will eventually clue in to the stakes around ASI and AI safety, and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns."

One concern about this is that "getting in early in the debate" might move up the time that the debate happens or becomes serious, which could be harmful.

An alternative approach would be to simply build latent capacity: work on issues that are already in the political domain (I think basic income as a solution to technological unemployment is already out there in Canada), but avoid raising new issues until other groups move into that space too. While doing that, you could build up skills and networks and learn how to advocate effectively in spaces that don't carry the same risk of prematurely politicizing AI-related issues. Then, when something related to AI becomes a clear target for policy advocacy, you could move onto it at the right time.

In response to comment by William_S on Open Thread #38
Comment author: rhys_lindmark 24 August 2017 03:40:41PM 2 points

Nice link! I think there's worthwhile research to be done here to get a more textured ITN assessment.

On Impact—Here's a small example of x-risk (nuclear threat coming from inside the White House): https://www.vanityfair.com/news/2017/07/department-of-energy-risks-michael-lewis.

On Neglectedness—Thus far it seems highly neglected, at least at a system level. hifromtheotherside.com is one of the only projects I know of in the space (but the founder is not contributing much time to it).

On Tractability—I have no clue. Many of these "bottom-up"/individual-level solution spaces seem difficult and organic (though we could pattern-match from the spread of the EA movement).

  1. There's a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I'm tempted to push an EA mindset of "outcome-izing/RCT-ing" the efforts in the space. So even if it doesn't score highly on Neglectedness, we could attempt to move the space towards more cost-effective/consequentialist solutions.
  2. This is highly related to the timewellspent.io movement that Tristan Harris (who was at EAGlobal) is pushing.
  3. I feel like we need to differentiate between the "political-level" and the "community-level".
  4. I'm tempted to think about this from the "communities connect with communities" perspective, i.e. the EA community is the "starting node/community", and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by the n-dimensional space seen here: http://blog.ncase.me/the-other-side/).
  5. Another version of this could be "scale the CFAR community".
  6. I think this could be related to Land Use Reform (https://80000hours.org/problem-profiles/land-use-reform/) and how we construct empathetic communities with a variety of people. (Again, see Nicky Case — http://ncase.me/polygons/)
Comment author: William_S 28 August 2017 05:00:44PM 0 points

Thanks for the Nicky Case links

In response to Open Thread #38
Comment author: William_S 23 August 2017 05:21:43PM 3 points

Any thoughts on individual-level political depolarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. http://web.stanford.edu/~dbroock/published%20paper%20PDFs/broockman_kalla_transphobia_canvassing_experiment.pdf seems like a useful study in this area (it seems possible that this approach could be used for issues on the other side of the political spectrum).

Comment author: turchin 11 August 2016 09:07:09PM 0 points

But if we stop emissions now, global warming will probably continue for around 1,000 years, as I read somewhere, and could even jump because the cooling effect of soot will stop.

Global coordination problems also exist, but they may not be as troublesome. In the first case, punishment comes for non-cooperation; in the second, for actions, and actions always seem to be more punishable.

Comment author: William_S 12 August 2016 12:33:53AM 0 points

I'm not saying that these risks mean we shouldn't do geoengineering, that they can't be solved, or that they will happen by default, just that they are additional risks (possibly unlikely but high impact) that you ought to include in your assessment and that we ought to make sure we avoid.

Re coordination problems not being bad: It's true that they might work out, but there's significant tail risk. Just imagine that, say, the US unilaterally decides to do geoengineering, but it screws up food production and the economy in China. This probably increases the chances of nuclear war (even more so than if climate change does it indirectly, as there would be a more specific, attributable event). It's worth thinking about how to prevent this scenario.

Comment author: William_S 11 August 2016 08:51:33PM 1 point

Extra risks from geoengineering:

Additional climate problems (i.e. it doesn't just uniformly cool the planet; I recall seeing a simulation somewhere in which climate change + geoengineering did not add up to no change, but instead significantly changed rainfall patterns).

Global coordination problems (who decides how much geoengineering to do, compensation for downsides, etc.). These could cause a significant increase in international tensions, plausibly leading to war.

Climate Wars by Gwynne Dyer describes some specific negative scenarios (for climate change + geoengineering): https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145

Comment author: William_S 16 January 2016 06:59:52PM 1 point

It might be useful to suggest Technology for Good as a place where companies with that focus could send job postings and have them seen by people who are interested in working on such projects.

Comment author: William_S 16 January 2016 06:58:12PM 1 point

This is probably not answerable until you've made some significant progress in your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology for good projects lines up with the skills required for those problems (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).

Comment author: DavidMoss 25 August 2015 07:33:32PM 7 points

I agree with all of this, Peter.

One problem I think especially worth highlighting is not exactly the added "risk" of extra meta-layers, but over-determination: i.e. you have a lot of people, orgs, and even background news articles and culture floating around, all persuading people to get involved in EA, so it's very hard to know how much any of them are contributing.

Comment author: William_S 25 August 2015 09:06:16PM 6 points

Another way of thinking about this is that, in an overdetermined environment, there would seem to be a point at which the impact of EA movement building becomes "causing a person to join EA sooner" rather than "adding another person to EA" (which is the current basis for evaluating EA movement building impact), and the former is much less valuable.
