Comment author: Gregory_Lewis 06 August 2018 08:46:04PM 3 points

Thanks for posting this.

I don't think there are any other sources you're missing - at least, if you're missing them, I'm missing them too (and I work at FHI). I guess my overall feeling is that these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and Russia between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.

One question might be how important further VoI (value of information) is for particular questions. I guess the overall 'x-risk chance' may have surprisingly little action relevance. The considerations about the relative importance of x-risk reduction seem fairly insensitive to whether the figure is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.

Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can generally be made in relative terms, without having to cash out the absolute values.

Comment author: Gregory_Lewis 05 August 2018 05:47:05PM 21 points

[My views only]

Although few materials remain from the early days of Leverage (I am confident they acted to remove themselves from the Wayback Machine, as other sites link to Wayback versions of their old documents which now 404), there are some interesting remnants:

  • A (non-wayback) website snapshot from 2013
  • A version of Leverage's plan
  • An early Connection Theory paper

I think this material (and the surprising absence of material since) speaks for itself - although I might write more later anyway.

Per other comments, I'm also excited by the plan of greater transparency from Leverage. I'm particularly eager to find out whether they still work on Connection Theory (and what the current theory is), whether they have addressed any of the criticism (e.g. 1, 2) levelled at CT years ago, whether the further evidence and argument mentioned as forthcoming in early documents and comment threads will materialise, and generally what research (on CT or anything else) they have done in the last several years, and when it will be made public.

Comment author: SiebeRozendal 23 July 2018 12:51:03PM 4 points

Speculative feature request: anonymous commenting and private commenting

Sometimes people might want to comment anonymously because they want to say something that could hurt their reputation or relationships, or affect the response to the criticism in an undesirable way. For example, OpenPhil staff criticising a CEA or 80K post would face awkward dynamics because OpenPhil partly funds these organizations. Having an option to comment anonymously (with named comments remaining the default) would allow freer speech.

Relatedly, some comments could be marked as "only readable by the author", because it's a remark about sensitive information. For example, feedback on someone's writing style or a warning about information hazards when the warning itself is also an information hazard. A risk of this feature is that it will be overused, which reduces how much information is spread to all the readers.

Meta: not sure if this thread is the best for these feature requests, but I don't know where else :)

Comment author: Gregory_Lewis 23 July 2018 03:45:00PM 3 points

Relatedly, some comments could be marked as "only readable by the author", because it's a remark about sensitive information. For example, feedback on someone's writing style or a warning about information hazards when the warning itself is also an information hazard. A risk of this feature is that it will be overused, which reduces how much information is spread to all the readers.

Forgive me if I'm being slow, but wouldn't private messages (already in the LW2 codebase) accomplish this?

Comment author: Dunja 20 July 2018 08:38:38PM 1 point

The problem with down-voting is that it allows views to be dismissed without any argument provided. It's kind of bizarre to give a detailed explanation of why you think X is Y, only to see that someone has down-voted it without explaining even briefly why they disagree (or why they "don't find it useful"). I just can't reconcile that approach with the idea of rational deliberation.

One solution would be to demand that every down-vote comes with a reason, to which the original poster can reply.

Comment author: Gregory_Lewis 21 July 2018 11:52:31AM 4 points

One solution would be to demand that every down-vote comes with a reason, to which the original poster can reply.

This has been proposed a couple of times before (as has removing downvotes entirely), and I get the sentiment that writing something and having someone 'drive-by-downvote' it is disheartening/frustrating (it doesn't keep me up at night, but a lot of my posts and comments carry 1-2 downvotes even when they end up net-positive, and I don't really have a steer as to what problem the downvoters wanted to highlight).

That said, I think this is a better cost to bear than erecting a large barrier to expressions of 'less of this'. I might be inclined to downvote some extremely long and tendentious line-by-line 'fisking' criticism without having to become the target of a similar reply myself by explaining why I downvoted it. I also expect a norm of 'explaining your reasoning' would lead to lots of unedifying 'rowing with the ref' meta-discussions ("I downvoted your post because of X" / "How dare you, that's completely unreasonable! So I have in turn downvoted your reply!").

Comment author: Larks 11 July 2018 10:55:08PM 3 points

(quoting from the open thread)

The timber is sold after 10 years, conservative return to the investor is $20k

This kind of investment would be considered high risk - this company only started this program three years ago, and the first trees haven't yet produced profit.

This sounds extremely suspect. Conservative investments do not generate 23% CAGRs, and there are plenty of investors willing to fund credible 10-year projects. Timber was a particularly fashionable asset class for a while, and 'environmental' investments are extremely fashionable right now.
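
As a minimal sketch of the arithmetic behind that 23% figure (the initial outlay isn't quoted above, so backing it out from the quoted $20k payout over 10 years is purely illustrative):

```python
def cagr(initial, final, years):
    """Compound annual growth rate implied by growing `initial` into `final` over `years`."""
    return (final / initial) ** (1 / years) - 1

# The initial outlay is not quoted above; backing it out from the quoted
# $20k payout after 10 years and the 23% figure is an illustrative assumption.
final_value, years, assumed_cagr = 20_000, 10, 0.23
implied_initial = final_value / (1 + assumed_cagr) ** years
print(f"Implied initial outlay: ${implied_initial:,.0f}")               # roughly $2,500
print(f"Implied CAGR: {cagr(implied_initial, final_value, years):.0%}") # ~23%
```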

[This is an opinion and is for information purposes only. It is not intended to be investment advice. You should consult a licensed financial advisor for investment advice. This is not the opinion of my firm. My firm may have positions in the discussed securities. This is not an invitation to buy or sell securities].

Comment author: Gregory_Lewis 12 July 2018 01:09:06AM 3 points

I'd also guess the social impact estimate would regress quite a long way to the mean if it were investigated to a similar level of depth as something like Cool Earth.
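
As a minimal sketch of the regression-to-the-mean point (a standard normal-normal shrinkage model with made-up numbers, not a calculation from the post itself):

```python
# Toy normal-normal shrinkage: a very optimistic but noisy impact estimate gets
# pulled most of the way back towards the prior mean. All numbers are made up.
prior_mean, prior_sd = 1.0, 1.0      # prior over true cost-effectiveness (arbitrary units)
estimate, estimate_sd = 10.0, 5.0    # headline estimate and its (large) standard error

precision_prior = 1 / prior_sd ** 2
precision_est = 1 / estimate_sd ** 2
posterior_mean = (precision_prior * prior_mean + precision_est * estimate) / (
    precision_prior + precision_est
)
print(f"Posterior mean: {posterior_mean:.2f}")  # ~1.35, i.e. most of the headline 10x regresses away
```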

Comment author: Gregory_Lewis 11 July 2018 03:05:26PM 4 points

One key challenge I see is something like 'grant-making talent constraint'. The skills needed to make good grants (e.g. good judgement, domain knowledge, maybe tacit knowledge, maybe relevant network, possibly commissioning/governance/operations skill) are not commonplace, and hard to explicitly 'train' outside i) having a lot of money of your own to practise with, or ii) working in a relevant field (so people might approach you for advice). (Open Philanthropy's recent hiring round might provide another route, but places were limited and extraordinarily competitive).

Yet the talents needed to end up at (i) or (ii) are somewhat different, as are the skills one acquires there: neither (e.g.) having a lot of money and being interested in AI safety, nor being an AI safety researcher oneself, guarantees making good AI safety grants; and time spent doing either of these things is time one cannot dedicate to gaining grant-making experience.

Dividing this labour (as the suggestions in the OP point towards) seems the way to go. Yet this can only get you so far if 'grantmaking talent' is not only limited among people with the opportunity to make grants, but limited across the EA population in general. Further, good grant-makers will gravitate to the largest pools of funding (reasonably enough, as this is where their contribution has the greatest leverage). This predictably leads to gaps in the funding ecosystem where 'good projects from the point of view of the universe' and 'good projects from the point of view of the big funders' subtly differ: I'm not sure I agree with the suggestions in the OP (i.e. upskilling people, new orgs), but I find Carl Shulman's remarks here persuasive.

Comment author: remmelt (EA Profile) 10 July 2018 12:19:26PM 1 point

First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered, i.e. in, say, the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of ‘they should change their decisions a lot if they put much more of the community’s brainpower into analysing data from a granular level upwards’.

So I appreciate that you actually gave specific reasons for why you'd be surprised to see a new focus area being taken up by people in the EA community in the next 10 years! Your arguments make sense to me and I’m just going to take up your opinion here.

Interestingly, your interpretation that this is evidence that there shouldn't be a radical alteration in which causes we focus on can be seen as both an outside view and an inside view. It's an outside view in the sense that it weights the views of people who've decided to move in the direction of working on the long-term future. It's also an inside view in that it doesn't consider roughly what percentage of past cosmopolitan movements, whose members converged on working on a particular set of problems, were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades and selection effects).

A historical example where this went wrong is how, in the 1920s, Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin's authoritarian regime and in Nazi Germany, respectively. Although I haven't done a survey of other historical movements (has anyone compiled such a list?), I still feel slightly more confident than you that we'd radically alter what we work on after 20 years if we made a concerted effort now to structure the community around enabling a significant portion of our 'members' (say 30%) to work together to gather, analyse and integrate data at each level (whatever that means).

It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I've made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I'd be curious to hear if any of these points has caused you to update any of your other intuitions:

Worldviews

  • more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences

  • research into how different humans trade off suffering and eudaimonia differently

  • a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontologist vs. consequentialist judgements and scope sensitivity)

Focus areas:

Global poverty

  • use of better metrics for wellbeing – e.g. life satisfaction scores and future use of real-time tracking of experiential well-being – that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)

  • use of better approaches to estimate environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points

Existential risk

  • more research on how to avoid evolutionary/game-theoretic “Moloch” dynamics, instead of the current "Maxipok" focus on ensuring that future generations will live, hoping that they will have more information to assess and deal with problems from there

  • for AI safety specifically, I could see a shift in focus from a single agent, produced out of (say) a lab, that presumably becomes powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. like Robin Hanson's work on Ems), or perhaps more research on how a single agent could be made out of specialised sub-agents representing the interests of various beings. I could also see a shift in focus to assessing and ensuring the welfare of sentient algorithms themselves.

Animal welfare

  • more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on what species to target

  • shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems

Community building

  • Some concepts like high-fidelity spreading of ideas and strongly valuing honesty and considerateness seem robust

  • However, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms and a shift away from local events and coffee chats to interactions on online (virtual) platforms

Comment author: Gregory_Lewis 11 July 2018 06:14:39AM 1 point

I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I'm not aware of a good macrohistorical dataset that could answer this question - reality in any case may prove underpowered.

Yet whether or not things would in fact change with more democratised decision-making/intelligence-gathering/etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. the ongoing professionalisation of many 'bits' of EA; see also that many important intellectual breakthroughs have historically been made by lone figures or small groups rather than by more swarm-intelligence-esque methods), and there's a 'clownside' risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar to appreciate 'X is an important issue' may be much lower than 'can contribute usefully to X'.

A lot seems to turn on whether the relevant problems are more 'high serial depth' (favouring intensive effort), 'high threshold' (favouring potentially rare ability), or broader and relatively shallower (favouring parallelization). I'd guess relevant 'EA open problems' are a mix, but this makes me hesitant about a general shove in this direction.

I have mixed impressions about the items you give below (which I appreciate were meant more as quick illustration than as some 'research agenda for the most important open problems in EA'). For some, I hold resilient confidence that the underlying claim is false; for more, I am uncertain, yet I suspect progress on answering these questions can wait (/feel we could punt on these for our descendants to figure out in the long reflection). In essence, my forecast is that this work would expectedly tilt the portfolios, but not so much as to produce (what I would call) a 'cause X' (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio towards mental health - or non-communicable disease - but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.)

Comment author: remmelt (EA Profile) 04 July 2018 06:57:57AM 0 points

I appreciate you mentioning this! It’s probably not a minor point, because if taken seriously it should make me a lot less worried about people in the community getting stuck in ideologies.

I admit I haven’t thought this through systematically. Let me mull over your arguments and come back to you here.

BTW, could you perhaps explain what you meant by the “There are other causes of an area...” sentence? I’m having trouble understanding that bit.

And by ‘on-reflection moral commitments’, do you mean considerations like population ethics and trade-offs between eudaimonia and suffering?

Comment author: Gregory_Lewis 04 July 2018 10:14:18AM 1 point

Sorry for being unclear. I've changed the sentence to (hopefully) make it clearer. The idea was that there could be other explanations for why people tend to gravitate to future stuff (groupthink, information cascades, selection effects) besides the balance of reason weighing in its favour.

I do mean considerations like population ethics etc. for the second thing. :)

Comment author: Gregory_Lewis 03 July 2018 11:38:20PM 3 points

Excellent work. I hope you'll forgive me taking issue with a smaller point:

Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. (my emphasis)

I'm not so sure that this is true, although it depends on how big an area you imagine will / should be 'overturned'. This also somewhat ties into the discussion about how likely we should expect to be missing a 'cause X'.

If cause X is another entire cause area, I'd be pretty surprised to see a new one in (say) 10 years which is similar to animals or global health, and even more surprised to see one that supplants the long term future. My rationale is that I see a broad funnel where EAs tend to move into the long term future/x-risk/AI, and once there they tend not to leave (I can think of a fair number of people who made the move from (e.g.) global health --> far future, but I'm not aware of anyone who moved from far future --> anything else). There are also people who have been toiling in the long term future vineyard for a long time (e.g. MIRI), and the fact we do not see many people moving elsewhere suggests this is a pretty stable attractor.

There are other reasons why a cause area might be a stable attractor besides all reasonable roads leading to it. That said, I'd suggest one can point to general principles which would somewhat favour this reading (e.g. the scope of the long term future, that the light cone commons, stewarded well, permits mature moral action in the universe towards whatever in fact has most value, etc.). I'd say similar points apply, to a lesser degree, to the broad landscape of 'on reflection moral commitments', and so the existing cause areas mostly exhaust this moral landscape.

Naturally, I wouldn't want to bet the farm on what might prove overconfidence, but insofar as it goes it supplies less impetus for lots of exploratory work of this type. At a finer level of granularity (and so a bit further down your diagram), I see less resilience (e.g. maybe we should tilt the existing global poverty portfolio more one way or the other depending on how the cash transfer literature turns out, maybe we should add more 'avoid great power conflict' to the long term future cause area, etc.). Yet I still struggle to see this adding up to radical alteration.

Comment author: Gregory_Lewis 03 July 2018 10:45:21PM 5 points

Thanks for writing this. How best to manage hazardous information is fraught, and although I have some work in draft and under review, much remains unclear - as you say, almost anything could have some downside risk, and never discussing anything seems a poor approach.

Yet I strongly disagree with the conclusion that the default should be to discuss potentially hazardous (but non-technical) information publicly, and I think your proposals for how to manage these dangers (e.g. talk to one scientist first) generally err on the lax side. I provide the substance of this disagreement in a child comment.

I’d strongly endorse a heuristic along the lines of, “Try to avoid coming up with (and don’t publish) things which are novel and potentially dangerous”, with the standard of novelty being a relatively uninformed bad actor rather than an expert (e.g. highlighting/elaborating something dangerous which can be found buried in the scientific literature should be avoided).

This expressly includes more general information as well as particular technical points (e.g. “No one seems to be talking about technology X, but here’s why it has really dangerous misuse potential” would ‘count’, even if a particular ‘worked example’ wasn’t included).

I agree it would be good to have direct channels of communication for people considering things like this to get advice on whether projects they have in mind are wise to pursue, and to communicate concerns they have without feeling they need to resort to internet broadcast (cf. Jan Kulveit’s remark).

To these ends, people with concerns/questions of this nature are warmly welcomed and encouraged to contact me to arrange further discussion.

Comment author: Gregory_Lewis 03 July 2018 10:47:53PM 4 points

0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (and being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’) and incur the least possible information hazard in achieving this.

A further heuristic which seems right to me is one should disclose information in the way that maximally disadvantages bad actors versus good ones. There are a wide spectrum of approaches that could be taken that lie between ‘try to forget about it’, and ‘broadcast publicly’, and I think one of the intermediate options is often best.

1: I disagree with many of the considerations which push towards more open disclosure and discussion.

1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not arrive at insights that are (to us) relatively straightforward, but could still ‘pick them up’ from elsewhere.

1.2: Although a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record in coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.

1.3: I emphatically disagree that we are at ‘infohazard saturation’, where the situation re. infohazards ‘can’t get any worse’. I also find it unfathomable ever being confident enough in this claim to base strategy upon its assumption (cf. eukaryote’s comment).

1.4: There are some benefits to getting out ‘in front’ of more reckless disclosure by someone else. Yet in cases where one wouldn’t want to disclose it oneself, delaying the downsides of wide disclosure as long as possible usually seems more important, and so tells against bringing this to an end by disclosing yourself, save in (rare) cases where one knows disclosure is imminent rather than merely possible.

2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.

2.1: It seems easy to imagine cases where the general idea comprises most of the danger. The conceptual step to a ‘key insight’ of how something could be dangerously misused ‘in principle’ might be much harder to make than subsequent steps from this insight to realising this danger ‘in practice’. In such cases the insight is the key bottleneck for bad actors traversing the risk pipeline, and so comprises a major information hazard.

2.2: For similar reasons, highlighting a neglected-by-public-discussion part of the risk landscape where one suspects information hazards lie has a considerable downside, as increased attention could prompt investigation which brings these currently dormant hazards to light.

3: Even if I take the downside risks as weightier than you do, one still needs to weigh these against the benefits. I take ‘general (or public) disclosure’ to have little marginal benefit over more limited disclosure targeted at key stakeholders. As the latter approach greatly reduces the downside risks, it is usually the better strategy by the lights of cost/benefit. At least trying targeted disclosure first seems a robustly better strategy than skipping straight to public discussion (cf.).

3.1: In bio (and I think elsewhere) the set of people relevant to setting strategy and otherwise contributing to reducing a given risk is usually small and known (e.g. particular academics, parts of the government, civil society, and so on). A particular scientist unwittingly performing research with misuse potential might need to know the risks of their work (likewise some relevant policy and security stakeholders), but the added upside of illustrating these risks in the scientific literature is limited (and the added downsides much greater). The upside of discussing them in the popular/generalist literature (including EA literature not narrowly targeted at those working on biorisk) is more limited still.

3.2: Information also informs decisions around how to weigh causes relative to one another. Yet less-hazardous information (e.g. the basic motivation given here or here, and you could throw in social epistemic steers from the prevailing views of EA ‘cognoscenti’) is sufficient for most decisions and decision-makers. The cases where this nonetheless might be ‘worth it’ (e.g. you are a decision maker allocating a large pool of human or monetary capital between cause areas) are few and so targeted disclosure (similar to 3.1 above) looks better.

3.3: Beyond the direct cost of potentially giving bad actors good ideas, the benefits of more public discussion may not be very high. There are many ways public discussion could be counter-productive (e.g. alarmism, ill-advised remarks poisoning our relationship with scientific groups, etc.). I’d suggest the examples of cryonics, AI safety, GMOs and other lowlights of public communication of policy and science are relevant cautionary examples.

4: I also want to supply other, more general considerations which point towards a very high degree of caution:

4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here), there is also an asymmetry: it is much easier to disclose previously-secret information than to make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.
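
As a minimal Monte Carlo sketch of the unilateralist’s curse dynamic mentioned above (arbitrary illustrative numbers, not drawn from any of the linked work): even when disclosure has negative true value, the chance that at least one well-intentioned actor overestimates it and discloses unilaterally grows with the number of actors able to act independently.

```python
import random

def p_unilateral_disclosure(n_agents, true_value=-1.0, noise_sd=1.0, trials=20_000):
    """Estimate the probability that at least one of n_agents, each judging the
    (negative) true value of disclosure with independent noise, deems it positive
    and so discloses unilaterally."""
    disclosed = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise_sd) for _ in range(n_agents))
        if any(e > 0 for e in estimates):
            disclosed += 1
    return disclosed / trials

for n in (1, 5, 20):
    print(n, round(p_unilateral_disclosure(n), 2))
# With these arbitrary numbers the probability of disclosure is roughly
# 0.16, 0.58 and 0.97 respectively - the risk rises sharply with the number of actors.
```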

4.2: I take the examples of analogous fields to also support great caution. As you note, there is a norm in computer security of ‘don’t publicise a vulnerability until there’s a fix in place’, and of initially informing a responsible party to give them the opportunity to do this pre-publication. Applied to bio, this suggests targeted disclosure to those best placed to mitigate the information hazard, rather than public discussion in the hope of prompting a fix to be produced. (Not to mention that a ‘fix’ in this area might prove much more challenging than pushing a software update.)

4.3: More distantly, adversarial work (e.g. red-teaming exercises) is usually done by professionals, with a concrete decision-relevant objective in mind, with exceptional care paid to operational security, and their results are seldom made publicly available. This is for exercises which generate information hazards for a particular group or organisation - similar or greater caution should apply to exercises that one anticipates could generate information hazards for everyone.

4.4: Even more distantly, norms of intellectual openness are used more in some areas, and much less in others (compare the research performed in academia to security services). In areas like bio, the fact that a significant proportion of the risk arises from deliberate misuse by malicious actors means security services seem to provide the closer analogy, and ‘public/open discussion’ is seldom found desirable in these contexts.

5: In my work, I try to approach potentially hazardous areas as obliquely as possible, more along the lines of general considerations of the risk landscape or from the perspective of safety-enhancing technologies and countermeasures. I do basically no ‘red-teamy’ types of research (e.g. brainstorm the nastiest things I can think of, figure out the ‘best’ ways of defeating existing protections, etc.)

(Concretely, this would comprise asking questions like, “How are disease surveillance systems forecast to improve over the medium term, and are there any robustly beneficial characteristics for preventing high-consequence events that can be pushed for?” or “Are there relevant limits which give insight to whether surveillance will be a key plank of the ‘next-gen biosecurity’ portfolio?”, and not things like, “What are the most effective approaches to make pathogen X maximally damaging yet minimally detectable?”)

I expect a non-professional doing more red-teamy work would generate less upside (e.g. less well networked to people who may be in a position to mitigate vulnerabilities they discover, less likely to unwittingly duplicate work) and more downside (e.g. less experience with trying to manage info-hazards well) than I. Given I think this work is usually a bad idea for me to do, I think it’s definitely a bad idea for non-professionals to try.

I therefore hope people working independently on this topic approach ‘object level’ work here with similar aversion to more ‘red-teamy’ stuff, or instead focus on improving their capital by gaining credentials/experience/etc. (this has other benefits: a lot of the best levers in biorisk are working with/alongside existing stakeholders rather than striking out on one’s own, and it’s hard to get a role without (e.g.) graduate training in a relevant field). I hope to produce a list of self-contained projects to help direct laudable ‘EA energy’ to the best ends.
