Comment author: RobBensinger 31 October 2017 03:38:42PM * 1 point

I don't think we should describe all instances of deference to any authority, all uses of the outside view, etc. as "modesty". (I don't know whether you're doing that here; I just want to be clear that this at least isn't what the "modesty" debate has traditionally been about.)

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions, or trust the specialists to figure it out?

I don't think there's any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you've understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, which seems to correspond to the QIT interpretation on the wiki page of interpretations, and which fits with probabilities being in the mind).

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn't answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers' data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Comment author: WillPearson 31 October 2017 04:41:30PM * 0 points

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details.

Heh, I'm in danger of getting nerd-sniped into physics land, which would be a multiyear journey. I found myself trying to figure out whether the stories in this paper count as real macroscopic worlds or not (or hidden variables). And then I tried to figure out whether it matters or not.

I'm going to bow out here. I mainly wanted to point out that there are more possibilities than just believing in Copenhagen or believing in Everett.

Comment author: RobBensinger 31 October 2017 01:16:17PM * 1 point

He endorses "many worlds" in the sense that he thinks the wave-function formalism corresponds to something real and mind-independent, and that this wave function evolves over time to yield many different macroscopic states like our "classical" world. I've heard this family of views called "(QM) multiverse" views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.

From a 2008 post in the MWI sequence:

One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of.

[... W]hat does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? Here's the map—where's the territory?

I don't know. It's an open problem. [...]

This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics. Everything else—everything else—obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.
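For concreteness, here are those two rules written out, as a minimal LaTeX sketch (this is just the standard textbook statement of each, not anything specific to Eliezer's view):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Unitary (Schrodinger) evolution is linear: the time-evolution
% operator U(t) distributes over sums of amplitude distributions.
\[
  U(t)\bigl(\psi_A + \psi_B\bigr) = U(t)\,\psi_A + U(t)\,\psi_B
\]
% The Born rule is the lone exception: it is quadratic, not linear,
% in the amplitudes.
\[
  P(i) = \bigl|\langle\, i \mid \psi \,\rangle\bigr|^{2}
\]
\end{document}
```

The puzzle in the quoted passage is exactly the mismatch between these two lines: the dynamics are linear, but the rule connecting amplitudes to observed frequencies is not.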

Comment author: WillPearson 31 October 2017 02:49:53PM 0 points

Ah, it has been a while since I engaged with this stuff. That makes sense. I think we are talking past each other a bit, though. I've adopted a moderately modest approach to QM, since I've not touched it in a while and I expect the debate has moved on.

We started from a criticism of a particular position (the Copenhagen interpretation), which I think is a fair thing to do for the modest and the immodest alike. The modest person might misunderstand a position and be able to update better if they criticize it and get a better explanation.

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions, or trust the specialists to figure it out?

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, which seems to correspond to the QIT interpretation on the wiki page of interpretations, and which fits with probabilities being in the mind).

Comment author: RobBensinger 31 October 2017 12:50:21PM * 0 points

Yeah, I'm not making claims about what modest positions think about this issue. I'm also not endorsing a particular solution to the question of where the Born rule comes from (and Eliezer hasn't endorsed any solution either, to my knowledge). I'm making two claims:

  1. QM non-realism and objective collapse aren't true.
  2. As a performative corollary, arguments about QM non-realism and objective collapse are tractable, even for non-specialists; it's possible for non-specialists to reach fairly confident conclusions about those particular propositions.

I don't think either of those claims should be immediately obvious to non-specialists who completely reject "try to ignore object-level arguments"-style modesty, but who haven't looked much into this question. Non-modest people should initially assign at least moderate probability to both 1 and 2 being false, though I'm claiming it doesn't take an inordinate amount of investigation or background knowledge to determine that they're true.

(Edit re Will's question below: In the QM sequence, what Eliezer means by "many worlds" is only that the wave-function formalism corresponds to something real in the external world, and that this wave function evolves over time to yield many different macroscopic states like our "classical" world. I've heard this family of views called "(QM) multiverse" views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.)

Comment author: WillPearson 31 October 2017 12:59:54PM 0 points

and Eliezer hasn't endorsed any solution either, to my knowledge)

Huh, he seemed fairly confident about endorsing MWI in his sequence here.

Comment author: RobBensinger 31 October 2017 12:41:35AM * 1 point

Going back to your list:

nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics

I haven't looked much at the nutrition or population ethics discussions, though I understand Eliezer mistakenly endorsed Gary Taubes' theories in the past. If anyone has links, I'd be interested to read more.

AFAIK Eliezer hasn't published why he holds his views about animal consciousness, and I don't know what he's thinking there. I don't have a strong view on whether he's right (or whether he's overconfident).

Concerning zombies: I think Eliezer is correct that the zombie argument can't provide any evidence for the claim that we instantiate mental properties that don't logically supervene on the physical world. Updating on factual evidence is a special case of a causal relationship, and if instantiating some property P is causally impacting our physical brain states and behaviors, then P supervenes on the physical.

I'm happy to talk more about this, and I think questions like this are really relevant to evaluating the track record of anti-modesty positions, so this seems like as good a place as any for discussion. I'm also happy to talk more about meta questions related to this issue, like, "If the argument above is correct, why hasn't it convinced all philosophers of mind?" I don't have super confident views on that question, but there are various obvious possibilities that come to mind.

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

Comment author: WillPearson 31 October 2017 08:51:24AM 1 point

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

I don't think the modest view (at least as presented by Gregory) would commit to any particular interpretation, as there is still significant debate.

The informed modest person would go, "You have object-level reasons to dislike these interpretations. Other people have object-level reasons to dislike your interpretations. Call me when you have hashed it out or done an experiment to pick a side." They would go on and do QM without worrying too much about what it all means.

Comment author: kbog 30 October 2017 07:39:53AM * 2 points

The difference between EAs and AI researchers on many object-level claims, like the probability that there will be an intelligence explosion and so on, is not very large. This survey demonstrated it: https://arxiv.org/abs/1705.08807

AI researchers are just more likely to have an attitude that anything less than ~10% likely to occur should be ignored, or that existential risks are not orders of magnitude more important than other things, or to make similar kinds of judgement calls.

The one major technical issue where EAs might be systematically different from AI researchers would be the validity of current research in addressing the problem.

Comment author: WillPearson 30 October 2017 09:23:34AM 0 points

Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought more than 10%.

I would also have expected more debate about explosive progress, beyond just the recent Hanson-Yudkowsky flare-up, if there was as much doubt in the community as that survey suggests.

Comment author: WillPearson 29 October 2017 08:27:30PM 1 point

Another reason to not have too much modesty within society is that it makes expert opinion very appealing to subvert. I wrote a bit about that here.

Note that I don't think my own views about the things I believe to be subverted/unmoored would necessarily be correct; rather, the first order of business would be to try to build a set of experts with better incentives.

Comment author: WillPearson 27 October 2017 07:53:44PM 4 points

Since I've not seen it mentioned here, unconferences seem like an inclusive type of event as described above. I'm not sure how EAG compares.

Comment author: kbog 27 October 2017 08:30:54AM * 10 points

On one hand, it is technically better to change things if that motivates people to become involved in the community. But on the other hand, if someone is ethically motivated to do the right thing, and they find that EA is plausibly in the right lane for this purpose, then you would expect them to be involved in productive activities regardless of whether their personality types are similar or not. That's not any more of a sacrifice than we make in other sorts of things: I recruited for a finance career, despite the fact that the personality types and culture are antithetical to my own; I am in the military, despite the fact that the personality types and culture are antithetical to my own; I donate money, despite the fact that I would have more personal happiness if I didn't; and so on.

The kinds of people who would be doing EA things if and only if we were a little bit more appealing are the kinds of people who won't take the ethically optimal career route, because the ethically optimal career route is not likely to be optimally appealing, and that is something that we can't change. If someone can only be brought into the movement by catering to them, they're not going to suddenly change and automatically act as forcefully and positively as the rest of us; they'll still need to be catered to for additional steps on and on into the future. You can see examples of this with activist groups on college campuses, where administrations make costly accommodations and concessions to activists and yet continually face additional demands and disruption.

This, of course, is not a statement that all people who are motivated in such a way are like this nor is it a statement that such people would not bring substantial positive value to the movement on balance. And I'm not making any judgements about character, just observing the ethically relevant facets of human behavior. The point is that the positive impact of such expansion is more limited than you would naively think and therefore warrants less resource allocation than expansions which would attract similar numbers of other types of people.

I'm also skeptical that concessions like this actually do much. If you want an example, look at the early criticisms of EA, where people talked about how we 'neglect systemic change'. Over and over and over again, we explained that you can do systemic change in EA, please come and do systemic change with us, we're compatible, and so on. There were a couple articles saying "Can EA change the world? It already has" and "We love systemic change." Now there have even been two peer-reviewed published philosophy papers driving this point home, by Joshua Kissel and Brian Berkey. Then 80,000 Hours said "hold up guys, only 15% of people should earn to give, we don't want to be misunderstood." And on and on and on. This was all a fine response, of course.

But has it actually changed the behavior of the people who raised those critiques? Have any of the critics recanted and said "okay, I'll join EA now, and develop some EA-based systemic change"? Are EA's ranks swelling with a new crop of excited leftist revolutionaries? No! This kind of movement growth is nowhere to be found! When was the last time you heard ANY Effective Altruist argue that poverty alleviation is neutral or harmful because it reduces the probability that capitalism will be superseded? Never! Yes, there are a few EA leftists whose main priority is to systemically reform capitalism, but not significantly more than there were in the first place, and they are a tiny group in comparison to the liberals, the conservatives, the vegans, the x-risk people, and so on. As far as I can tell, the impact of all these articles and comments in bringing leftists into active participation with EA was totally nonexistent.

So while there is a difference between the demographics under consideration here and the progressive-leftist political group in this example, don't expect to work any wonders from piling on disclaimers and bureaucracy and openness and other things. You can look outside EA for other examples. The US military does many of the things that you suggest: SHARP/EO representatives, briefings, policies, and on and on. But that didn't substantially change our demographics, culture, sexual assault rates, or anything of the sort. It didn't stop PVT Manning from going sufficiently crazy and dissociated from military culture, on the basis of gender dysphoria, to decide to harm the organization.

So instead of engaging in the knee-jerk logic of "there's a diversity problem - let's start doing pro-diversity things!" we should be focusing on reason and evidence so that we only spend time and money on solutions where we have a significant expectation of something meaningful being accomplished.

Of course you are pretty clear in your post that, yes, things should be evidence based, there is weak-but-mounting evidence supporting interventions like this, and so on. But I want to emphasize a higher bar of skepticism than what people are likely to take away from your post, especially since the opportunity cost for EA resources is much higher than it is in other contexts.

E.g.:

● CEA and EAF could both, or jointly, hire a Diversity & Inclusion Officer

● All organizations should hire communications staff who are versed in inclusionary communications practices. Alternatively, the Diversity & Inclusion Officer could train them.

These are both moderately costly additions to bureaucracy, and I don't really see what their value is. I'm aware that lots of organizations put emphasis on these types of things, but what are the exact outputs and impacts?

● Adopt and enforce a clear policy — as organizations and individuals — for dealing seriously and fully with illegal actions like sexual harassment and explicit discrimination or discrimination revealed by HR or legal counsel.

While I'm not saying this is a bad thing, I don't see what the motivation is. If the problem is gender bias, or ineffective marketing, or narrow appeal, or things of that nature, then they should be dealt with appropriately. What we should not do is lump together every gender-related problem as part of a monolithic reason to implement all gender-related solutions. It's simply less efficient.

For instance, it should be outright unacceptable for someone to say that women do not contribute to society and are leeches if they don’t offer men sex. This actually happened, recently,

So I was the one who, more than anyone else, told that person that they were an idiot who ought to shut up. But it wasn't just because they were sexist. I think one of the underlying problems here is not just sexism but people who just don't care enough about the EA movement itself. I would expect any sexist or racist to at least be decent and intelligent enough to know that there are some pots you don't stir, purely as a practical means of maintaining a productive movement. I see lots of people talk about making EA more appealing or more diverse, which is fine, but one of the underlying causes of all of these issues, both when it comes to sexists picking fights and when it comes to members of marginalized groups refraining from contributing, is that people care more about things like lifestyle, community and tribal affiliation than they do about sitting down to do productive ethical work. And that's a super hard thing to change, but it warrants some attention. We can't just sit around and rely on a shaky combination of atypical saints on one hand and clever marketing on the other.

Edit: also, I have to add that you are being a little bit uncharitable to the person who said that. They said something bad, but not quite how you describe it. I'm not saying this because I care about them, but just because it's bad if people read this and think "omg, an effective altruist said this! Look how sexist EAs are!" and it gets repeated and spread as a false rumor.

Comment author: WillPearson 27 October 2017 07:44:03PM * 4 points

Yes, there are a few EA leftists whose main priority is to systemically reform capitalism, but not significantly more than there were in the first place, and they are a tiny group in comparison to the liberals, the conservatives, the vegans, the x-risk people, and so on. As far as I can tell, the impact of all these articles and comments in bringing leftists into active participation with EA was totally nonexistent.

I'm not sure whether I count or not. My work on autonomy can be seen as investigating systemic change. I've been to a couple of meetups and hung around this forum a bit, and I can tell you why the community is not very enticing or inviting from my point of view, if you are interested.

Edit to add:

I can only talk about EA London, where I went to a couple of the meetups. To preface: I had generally good interactions with people; they were nice, and we chatted a bit about non-systemic EA interests (which I am also interested in). There was lots of conversation and not too much holding forth.

I was mainly trying to find people interested in discussing AI/future things, as any systemic change has to take this into consideration and there is lots of uncertainty. The organisers asked what I was interested in, and I asked if anyone knew people primarily interested in AI, but I didn't get any useful responses. At the time I didn't know enough about EA to ask about systemic change (and wasn't as clear on what exactly I wanted).

This slightly rambling point is to illustrate that it is hard to connect with people on niche topics (which AI seems to be in London). There probably needs to be a critical mass of people joining at once for a locality to support a topic.

I've joined a London EA facebook group focused on the future so I have my hopes.

That is pretty benign: a problem, but not a large one. More could be done, but more could always be done.

The second, which I think might be more exclusionary, is EAG. I applied for tickets and to volunteer but I've heard nothing so far. I'm unsure why there is even selection on tickets.

I suspect I don't look like lots of EAs on an application form: I don't earn to give, but have taken a pay cut to work part-time on my project, which I hope will help everyone in the long run. I may not have quite the same chipper enthusiasm.

I suspect other people interested in systemic change will look similarly different from lots of EAs, and the curation of EAG might be biased against them. If it is, then I probably have not lost out much by not going!

I mainly wrote this comment to try and give some possible reasons for the lack of a significant group interested in systemic change (despite articles/comments to the contrary). I'm not expecting EA to change; you can't be a group for everyone, and you do interesting and good things. But it is good to know some of the potential reasons why things are how they are.

Edit2: I got a polite email from Julia Wise telling me that the reason I didn't get an invite was that London was a smaller event and that people were selected on the basis of "those who will benefit most from attending EA Global London." It would be nicer if these things were a little more transparent (e.g. "you are applicant #X; we can only accept #Y applicants"), to give you a better idea of the chances. From my own perspective, for people interested in currently niche EA topics it is important to be able to potentially meet other people from around the world who share those interests. EAG might not be the place for that, though.

Comment author: WillPearson 23 October 2017 09:02:34PM 0 points

As a data point on how useful ITN (importance, tractability, neglectedness) is for trying to think about systemic change, I shall talk about my attempt.

My attempt is here.

So the intervention was to try to start a movement of people who share technological development with each other to help them live (food, construction, computing), with the goal of creating autonomous communities (being capable of space faring or living on other planets would be great, but is not the focus). The main difference between it and the normal open-source movement is that it would focus on intelligence augmentation to start with, for economic reasons.

The main problem I came across was arguing that it would have a positive long-term impact, in terms of existential risk and war risks, compared to singletons (I think it would, because I am skeptical of the stability of singletons).

It will take a long time to form a convincing argument about this one way or the other, and it will probably require significantly more expertise in economics, sociology, and game theory as well.

If a fully justified ITN report is needed before EAs start to be interested in it, then it is likely that it and other ideas like it will be neglected. Some ideas will need significant resources put into determining their tractability and also the sign of their impact.
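For reference, the decomposition I was trying to fill in looks roughly like this (a sketch of the framework as 80,000 Hours presents it; the exact units and wording vary by presentation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of the standard ITN decomposition: the middle terms cancel,
% so the product estimates good done per unit of extra resources.
\[
  \frac{\text{good done}}{\text{extra resources}}
  =
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
\]
\end{document}
```

For an idea like mine, the tractability factor, and even the sign of the "good done" numerator, is precisely the part that would take years of work to estimate.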

In response to Open Thread #39
Comment author: WillPearson 23 October 2017 08:28:06PM 2 points

I've posted about an approach to AGI estimation.

I would love to find collaborators on this kind of thing. In London would be great.
