Comment author: Michelle_Hutchinson 05 September 2017 09:00:44AM *  0 points [-]

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But the paragraph I quoted says 'favoring a cause area outlier' - there, 'outlier' picks out AI itself as an outlier among the causes people think are important. Saying that the Bay favours AI, which is an outlier among the causes people favour, is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seems to support the latter but not the former.

Comment author: Peter_Hurford  (EA Profile) 05 September 2017 02:36:06AM 0 points [-]

This is true and will be fixed. Sorry.

Comment author: Peter_Hurford  (EA Profile) 05 September 2017 02:35:42AM 0 points [-]

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

I can understand why you're having trouble interpreting the first graph, because it is wrong. It looks like in my haste to correct the truncated margin problem, I accidentally put a graph for "near top priority" instead of "top priority". I will get this fixed as soon as possible. Sorry. :(

We will have to re-explore the aggregation and disaggregation with an updated graph. With 237 people saying AI is the top priority and 150 saying non-AI far future is the top priority (387 combined), versus 601 saying global poverty is the top priority, global poverty still wins even after aggregating the two far-future causes. Sorry again for the confusion.

-

The term 'outlier' seems false according to the stats you cite

The term "outlier" here is meant in the sense of a statistically significant outlier, as in it is statistically significantly more in favor of AI than all other areas. 62% of people in the Bay think AI is the top priority or near the top priorities compared to 44% of people elsewhere (p < 0.00001), so it is a difference of a majority versus non-majority as well. I think this framing makes more sense when the above graph issue is corrected -- sorry.

Looking at it another way, the Bay contains 3.7% of all EAs in this survey, but 9.6% of all EAs in the survey who think AI is the top priority.
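For anyone who wants to check the shape of that comparison, here is a minimal sketch of a two-proportion test of the kind behind a p-value like that. The counts are hypothetical placeholders (the Bay and non-Bay sample sizes aren't given in this thread), so the printed p-value won't reproduce the figure quoted above.

```python
# Minimal sketch of a two-proportion comparison (Bay vs. elsewhere) for
# "AI is the top or near-top priority". The counts are hypothetical
# placeholders chosen only to roughly match the quoted shares (62% vs. 44%);
# with the survey's actual sample sizes the p-value would differ.
from scipy.stats import chi2_contingency

bay = {"yes": 62, "no": 38}            # hypothetical Bay respondents
elsewhere = {"yes": 440, "no": 560}    # hypothetical non-Bay respondents

table = [[bay["yes"], bay["no"]],
         [elsewhere["yes"], elsewhere["no"]]]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"Bay: {bay['yes'] / sum(bay.values()):.0%} vs. "
      f"elsewhere: {elsewhere['yes'] / sum(elsewhere.values()):.0%}, "
      f"p = {p_value:.3g}")
```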

Comment author: Wei_Dai 04 September 2017 11:37:52PM 1 point [-]

I'm also worried about the related danger of AI persuasion technology being "democratically" deployed upon open societies (i.e., by anyone with an agenda, not necessarily just governments and big corporations), with the possible effect that, in the words of Paul Christiano, "we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party." This is arguably already true today for those especially vulnerable to conspiracy theories, but eventually it will affect more and more people as the technology improves. How will we solve our collective problems when the safety of discussions is degraded to such an extent?

Comment author: kbog  (EA Profile) 04 September 2017 08:18:06PM 0 points [-]

inverse reinforcement learning could allow AI systems to learn to model the current preferences and likely media reactions of populations, allowing new AI propaganda systems to pre-test ideological messaging with much more accuracy, shaping gov't 'talking points', policy rationales, and ads to be much more persuasive.

The same can be said for messages which come from non-government sources. Governments have always had an advantage in resources and laws, so they've always had the high ground in information warfare/propaganda, but at the same time dissenting ideas are frequently spread. I don't see why the balance would be shifted.

Likewise, the big US, UK, EU media conglomerates could weaponize AI ideological engineering systems to shape more effective messaging in their TV, movies, news, books, magazines, music, and web sites -- insofar as they have any ideologies to promote.

Likewise, the same reasoning goes for small and independent media and activist groups.

Compared to other AI applications, suppressing 'wrong-think' and promoting 'right-think' seems relatively easy. It requires nowhere near AGI. Data mining companies such as Youtube, Facebook, and Twitter are already using semi-automatic methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological control systems don't even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national security state.

Yeah, it is a problem, though I don't think I would classify it as AI safety. The real issue is one of control and competition. Youtube is effectively a monopoly and Facebook/Twitter are sort of a duopoly, and all of them are in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the 'Voat Phenomenon' (any alternative platform to the default platform will first attract the extreme types who were the first people to be ostracized by the main platform, so that the alternative platform will forever have a repulsive core community and a tarnished reputation). I'm sure AI can be used as a weapon to either support or dismantle the strength of these institutions; it seems better to approach it from a general perspective than just as an AI one.

Comment author: Michelle_Hutchinson 04 September 2017 02:40:19PM *  3 points [-]

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that long run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Although you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, and most who picked AI / non-AI FF picked both?

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near-top cause), and particularly misleading given the differences made here by choices of aggregation (i.e. you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay').

Writing on my own behalf, not my employer's.

Comment author: Austen_Forrester 04 September 2017 02:25:08PM 1 point [-]

For "far future"/"long term future," you're referring to existential risks, right? If so, I would think calling them existential or x-risks would be the most clear and honest term to use. Any systemic change affects the long term such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, etc, etc. I therefore don't feel it's that honest to refer to x-risks as "long term future."

Comment author: CalebWithers  (EA Profile) 04 September 2017 02:31:12AM 3 points [-]

It seems that the numbers in the top priority paragraph don't match up with the chart

Comment author: Daniel_Eth 03 September 2017 05:56:41AM *  0 points [-]

I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main reasons are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying the solution (as contrasted to AGI Safety, where all vested interests would want the AI to be aligned).

I'd imagine this sort of tech would actually decrease the risk from bioweapons etc for the same reason that I'd imagine it would decrease terrorism generally, but I could be wrong.

Regarding the US in particular, I'm personally much less worried about the corporations pushing their preferred ideologies than them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focussed on profits than on pushing ideologies.

Comment author: Tee 02 September 2017 08:23:10PM 2 points [-]

09/02/17 Post Update: The previously truncated graphs "This cause is the top priority" and "This cause is the top or near top priority" have been adjusted in order to better present the data

Comment author: Tee 02 September 2017 08:20:41PM 3 points [-]

09/02/17 Update: We've updated the truncated graphs

Comment author: RyanCarey 02 September 2017 05:09:20PM *  6 points [-]

It does look like AI and deep learning will by default push toward greater surveillance and greater power for intelligence agencies. It could supercharge passive surveillance of online activity and prediction of future crime, and could make lie detection reliable.

But here's the catch. Year on year, AI and synthetic biology become more powerful and accessible. On the Yudkowsky-Moore law of mad science: "Every 18 months, the minimum IQ necessary to destroy the world drops by one point." How could we possibly expect to be headed toward a stably secure civilization, given that the destructive power of technologies is increasing more quickly than we are really able to adapt our institutions and ourselves to deal with them? An obvious answer is that in a world where many can engineer a pandemic in their basement, we'll need to have greater online surveillance to flag when they're ordering a concerning combination of lab equipment, or to more sensitively detect homicidal motives.

On this view, the issue of ideological engineering from governments that are not acting in service of their people is one we're just going to have to deal with...

Another thought is that there will be huge effects from AI (like the internet in general) that come from corporations rather than government. Interacting with apps aggressively tuned for profit (e.g. a supercharged version of the vision described in the Time Well Spent video - http://www.timewellspent.io/) could - I don't know - increase the docility of the populace or have some other wild kind of effects.

Comment author: KevinWatkinson  (EA Profile) 02 September 2017 08:21:01AM *  0 points [-]

Thanks for your comment.

This is what ACE say in relation to the criterion.

“4. The charity possesses a strong track record of success. The charity has a record of successful achievement of incremental goals or of demonstrated progress towards larger goals. Note that this implies the charity has been in existence for some length of time. While very young charities may have strong potential to return large results for small initial amounts of funding, donating to charities without track records is inherently risky.”

I think it is reasonable to say that GFI has not been in existence for a particularly long time, having launched in 2016 and having been reviewed in 2016. Whatever other considerations might mitigate this issue, it still stands that the charity has been in existence for a very short period of time and did not possess a strong track record of success, and therefore it couldn't, in my view, meet criterion four. But as I said in the article, I think there is room for flexibility with newer groups.

My post here asked whether we ought to think more before we donate to GFI, not that EAs shouldn't want GFI - or necessarily any of the other groups that ACE recommends - to be fully funded. As I said, I think it is highly unlikely GFI wouldn't be, as they are viewed as such a good prospect. I would generally expect most people to agree that it would be a good idea to think more about the different issues related to funding, and very few people to argue that GFI shouldn't be fully funded.

I personally don’t donate money to ACE, for some of the reasons I have stated and others that follow. But just as with GFI, it isn't that I wouldn't want to see it fully funded; rather, I think other EAs could consider the issues more, and they might conclude it is a less good idea to put as much money into ACE until certain issues are resolved.

Some EAs believe there are few issues, others believe there are more; I'm one of the people who believe there are more. In my view there are also reasons to believe that ACE have been underfunded for some years, as I believe their scope should have been expanded and more charities evaluated, but I am uncertain whether there has been much interest in resolving a number of these issues, partly because people weight them differently. While I was in favour of Open Phil donating $500,000 to ACE this year, as a way to potentially resolve some issues, I am not in favour of the $1m funding cap.

I would prefer that more EAs consider reasons for thinking differently about the situation in relation to donations overall - including whether or not to let larger philanthropic organisations do most of the funding of top groups, or just to let them do it - and that EAs look at a broader range of organisations outside the ‘mainstream’. That is something which might have more appeal to people outside of EA, and it would need to be instigated from within EA. It’s not even an either/or situation in terms of evaluation; it would be possible to do both, if there were a desire to do so.

It’s true I’m not presently very satisfied with the process at ACE, and I think there are reasonable grounds that some other people might like to think differently about what to do in relation to that situation too. Incidentally, I would be in favour of independent and funded external meta-evaluation for all evaluation groups related to EA, and I see no reason why this shouldn’t be encouraged in order to improve the likelihood different issues are taken into account (that organisations might be missing) and to support evaluation groups to do the work they do. I regard it as incorporating a strategy to increase the likelihood different issues are fairly considered. It also gives reassurance to donors, and I see no reason not to put a system in place as a matter of best practice, or as is sometimes considered, better than best practice. This is something I have spoken about before with ACE, and I find the reasons to do it compelling, not least because it could add more legitimacy to the evaluation process.

-

On the issue of interventions, I also believe they need to include meta-evaluation. So what is the impact of say, vegan advocacy in relation to reducetarian advocacy? What is the impact of marginalising veganism to focus on ‘mainstreamness’? Or for saying we need to use the idea people love animals but hate vegans? I’m in favour of working out which interventions are effective, and within different approaches, not just comparisons between approaches to attempt to work out which one is ‘best’ (welfare or abolition). I would also like to see how ACE are considering the differences between top down and bottom up advocacy, social movements, ethical systems, and how ideas are represented or distorted within a mainstream / non-mainstream context. I think this could be something for the Experimental Research Division, and I think a good place to begin would be with foundational issues, with dialogue across the animal movement to establish where people are at with these forms of ideas.

It also wasn’t really my intention to suggest that Encompass or BEI fall outside the paradigm of abolition and welfare, but it is my belief that the Food Empowerment Project do. They were all examples of groups I am more interested in, but I haven't spoken to either Encompass or BEI to know where they see themselves in relation to welfare / abolition (nor do I intend to at the present time).

The problem I am referring to by mentioning the dichotomy of welfare and abolition is that it doesn’t provide enough scope for different groups to fit in: if people reject the EA idea of welfare and also reject abolition, where do they go? Where are these different approaches generally explored within EA? I am not saying this doesn't happen at all, but it happens very little, and in a very marginal way. So I wonder where the curiosity largely exists in relation to what different people are doing in the animal movement outside the idea of 'welfare'. For me it looks a lot like larger organisations are being functionally rational within the movement, which is understandable to a degree, but I think this has impacted how evaluation works. (I think Robert Jackall explores some of these issues in the book "Moral Mazes: The World of Corporate Managers", and I believe Jonathan Smucker maps some of the issues in his new book "Hegemony How-To".)

I also question whether ACE should use the abolitionist / welfare paradigm without really having completed a thorough consideration of its origins and implications. If this examination does exist, however, I would welcome seeing it.

Without this work I disagree about the idea of a ‘welfare’ mindset for tractability. How has that been articulated? What are the alternative mindsets? Where are they considered and comparisons made? People are highly interested in doing effective advocacy and some people want to be consistent with their approach, and find that is a sound way to empower people with the knowledge to make changes, whilst others are more interested in marketing techniques.

If we are in favour of diversity then we need to acknowledge and understand different approaches, and find ways that improve the work different people do, rather than adopting a dichotomy of welfare / abolition and saying welfare is best and that everyone ought to do it if they want to be most effective. For example, if we are looking at issues of social justice and speciesism, then the framework we use reasonably ought to fit with other frameworks in relation to discrimination and oppression. However, if people want to do conventional welfare, or reducetarianism, then ok, but the limitations ought to be acknowledged, and how they relate considered. I don't think I have seen where organisations in EA have completed this type of work, where it has had cross movement input.

As a movement model I would probably consider something along the lines of the following, to more easily refer to different ideas in the animal movement and improve communication, though I would consult broadly to get more ideas:

Welfare, new welfare.
Reducetarian, reducetarian animal rights.
Vegan, animal rights.
Abolitionist Approach.

Comment author: Peter_Hurford  (EA Profile) 02 September 2017 01:36:31AM 3 points [-]

Huh, I didn't even notice that either. Thanks for pointing that out. I agree that it's misleading and we can fix it.

Comment author: Robert_Wiblin 02 September 2017 01:20:51AM 3 points [-]

I didn't notice that when I first read this. It's especially easy to mis-read because the others aren't truncated. Strongly suggest editing to fix it.

Comment author: Austen_Forrester 01 September 2017 11:54:23PM -2 points [-]

I'm sure promoting killer robots will be popular among "effective altruists,"/ISIS, as it is a way to kill as many people as possible while making it look like an accident. "EAs" aren't fooling anyone about their true intentions.

Comment author: Buck 01 September 2017 11:48:09PM 6 points [-]

I wish that you hadn't truncated the y axis in the "Cause Identified as Near-Top Priority" graph. Truncating the y-axis makes the graph much more misleading at first glance.

Comment author: LewisBollard 01 September 2017 10:59:50PM *  4 points [-]

Thanks for raising this. I just want to clarify Open Phil’s policy on filling funding gaps. We look at each case and think about the pros and cons to ‘leaving space’ in a cost-benefit framework, which includes thinking about likely donor behavior in different cases. The ‘splitting’ policy applies to GiveWell top charities only; in other cases we often avoid being too high a % of someone’s budget, and are sometimes constrained by soft cause-level giving targets, but otherwise generally fill what we see as important funding gaps. It’s possible though not certain that we’ll fund GFI more - though if we do it won't be because GFI will "advocate on behalf of investments for philanthropists who also support Open Phil" — that's not a consideration I think about. I’d encourage potential donors to ask GFI what they’d do with more funds this year — I wouldn’t assume that ACE’s estimated room for more funding is still accurate.

Comment author: zdgroff 01 September 2017 10:08:49PM 3 points [-]

This seems like a more specific case of a general problem with nearly all research on persuasion, marketing, and advocacy. Whenever you do research on how to change people's minds, you increase the chances of mind control. And yet, many EAs seem to do this: at least in the animal area, a lot of research pertains to how we advocate, research that can be used by industry as well as effective animal advocates. The AI case is definitely more extreme, but I think it depends on a resolution to this problem.

I resolve the problem in my own head (as someone who plans on doing such research in the future) through the view that people likely to use the evidence most are the more evidence-based people (and I think there's some evidence of this in electoral politics) and that the evidence will likely pertain more to EA types than others (a study on how to make people more empathetic will probably be more helpful to animal advocates, who are trying to make people empathetic, than industry, which wants to reduce empathy). These are fragile explanations, though, and one would think an AI would be completely evidence-based and a priori have as much evidence available to it as those trying to resist would have available to them.

Also, this article on nationalizing tech companies to prevent unsafe AI may speak to this issue to some degree: https://www.theguardian.com/commentisfree/2017/aug/30/nationalise-google-facebook-amazon-data-monopoly-platform-public-interest

Comment author: WillPearson 01 September 2017 09:59:52PM 1 point [-]

I think this is part of the backdrop to my investigation into the normal computer control problem. People don't have control over their own computers, and the bad actors that do get control could be criminals, a malicious state, or AIs.
