Comment author: Tobias_Baumann 20 July 2017 08:40:43AM *  9 points

Thanks for writing this up! I agree that this is a relevant argument, even though many steps of the argument are (as you say yourself) not airtight. For example, consciousness or suffering may be related to learning, in which case point 3) is much less clear.

Also, the future may contain vastly larger populations (e.g. because of space colonization), which, all else being equal, may imply (vastly) more suffering. Even if your argument is valid and the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower (as you claim in 7.).

Finally, I would argue we should focus on the bad scenarios anyway – given sufficient uncertainty – because there's not much to do if the future will "automatically" be good. If s-risks are likely, my actions matter much more.

(This is from a suffering-focused perspective. Other value systems may arrive at different conclusions.)

Comment author: Nekoinentr 21 July 2017 12:53:49AM 1 point

The Foundational Research Institute site in the links above seems to have a wealth of writing about the far future!

Comment author: Nekoinentr 20 July 2017 02:47:31PM 1 point

On premise 1, a related but stronger claim is that humans tend to shape the universe to their values much more strongly than do blind natural forces. This allows for a simpler but weaker argument than yours: it follows that, should humans survive, the universe is likely to be better (according to those values) than it otherwise would be.

Comment author: Brian_Tomasik 11 July 2017 10:23:20PM 1 point

IMO, the philosophers who accept this understanding are the so-called "type-A physicalists" in Chalmers's taxonomy. Here's a list of some such people, but they're in the minority. Chalmers, Block, Searle, and most other philosophers of mind aren't type-A physicalists.

Comment author: Nekoinentr 20 July 2017 02:43:23PM 0 points

IMO, the philosophers who accept this understanding are the so-called "type-A physicalists" in Chalmers's taxonomy.

I'm not wholly sure I understand the connection between this and denying that consciousness is a natural kind. The best I can do (and perhaps you or thebestwecan can do better? ;-) ) is:

"If consciousness is a natural kind, then the existence of that natural kind is a separate fact from the existence of such-and-such a physical brain state (and vica versa)"

Comment author: ThomasSittler 04 July 2017 01:40:52PM *  1 point

Other changes you might consider:

(1) Creating more than one category for posts (e.g. research, outreach, announcements, chat).

The first advantage is that you can group posts by topic. The second, bigger advantage I see is that different norms can develop for different categories. The threshold for posting on the EA Forum is currently (perceived to be) too high for drafts or off-the-cuff ideas, so these either don't happen or move to Facebook (where discoverability is much worse). On the other hand, the prestige of posting, and the quality of discussion, is too low, so the most influential and busy EAs may not see it as a good use of their time. One way to improve this would be to have a very strictly moderated, high-prestige section and a more loosely moderated, low-prestige section.

(2) Allowing markdown in the post composer

Comment author: Nekoinentr 07 July 2017 11:50:14PM 0 points

Another change:

(3) Tagging users so they get notifications. I tried tagging "Tee", who posted here about moving up to the Executive Director role at .impact, in my previous comment. But I couldn't find a character like @ that allowed me to do this.

(Is there a place to post feature suggestions like this?)

Comment author: SamDeere 04 July 2017 03:26:18AM *  3 points

Short-to-medium term: some minor UI changes to bring branding more into line with the rest of effectivealtruism.org

Longer term ideas (caveat — these are just at the thought bubble stage at the moment and it's not clear whether they'd be valuable changes):

  • I think there's appetite for a discussion space that offers both content aggregation and original content. This might take the form of getting a more active subreddit (for example) going, but plausibly this could be something specifically built for the purpose that either integrates with or complements the existing forum.

  • We've thought about integrating logins between the webapp on EffectiveAltruism.org (what is currently just EA Funds) and the forum, to avoid the need to manage multiple accounts when doing various EA things online.

  • We've also thought a bit about integrating commenting systems so that discussion that happens on various EA blogs is mirrored on the forum (to avoid splitting discussions when cross-posting).

If there are things that you think would be useful (especially if you've been able to give this more thought than I have) that'd be great to know, with the caveat that we're pretty restricted by developer time on this, and the priority is ensuring ongoing maintenance of the existing infrastructure, rather than building out new features.

[eta spaces between dot points]

Comment author: Nekoinentr 07 July 2017 11:47:33PM 0 points

I presume CEA tech staff will make the branding changes, but is the plan for them also to make the longer-term changes, or would that continue to be the .impact community? I don't understand what roles CEA has taken on as of this announcement and what role .impact continues to have. It sounds from the first paragraph like .impact has decided to transition primary responsibility for forum maintenance and improvements to CEA, but the third-to-last paragraph suggests otherwise. Could someone from that community comment?

Comment author: Nekoinentr 07 July 2017 11:36:47PM 0 points

One reason for this is that, because there are donors with money on the sidelines, if the organisations were able to find someone with a good level of fit, they could fundraise enough money to pay for their salaries.

Can you (very roughly) quantify to what extent this is the case for EA organisations? (I imagine they will vary as to how donor-rich vs. potential-hire-rich they are, so some idea of the spread would be helpful.)

Comment author: Brian_Tomasik 30 June 2017 04:33:51AM *  0 points

I agree that it tells us little about the moral questions, but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven't gotten that far.)

One thing that makes consciousness interesting is that there's such a wide spectrum of views, from some people thinking that among current entities on Earth, only humans have consciousness, to some people thinking that everything has consciousness.

Comment author: Nekoinentr 07 July 2017 11:26:43PM 1 point

but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven't gotten that far.)

Who does and doesn't agree with that, then? You and thebestwecan clearly do. Do you know the opinions of prominent philosophers in the field, for instance David Chalmers, who sounds like he might be among them?

Comment author: Brian_Tomasik 29 June 2017 06:16:34AM *  1 point

A good example of what thebestwecan means by "objectivity" is the question "If a tree falls in a forest and no one is around to hear it, does it make a sound?" He and I would say there's no objective answer to this question because it depends what you mean by "sound". I think "Is X conscious?" is a tree-falls-in-a-forest kind of question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

Yeah, ontologically primitive, or at least so much of a natural kind, like the difference between gold atoms and potassium atoms, that people wouldn't really dispute the boundaries of the concept. (Admittedly, there might be edge cases where even what counts as a "gold atom" is up for debate.)

Comment author: Nekoinentr 29 June 2017 06:39:29AM 2 points

The idea of a natural kind is helpful. The fact that people mean different things by "consciousness" seems unsurprising, as that's the case for any complex word that people have strong motives to apply (in this case because consciousness sounds valuable). It also tells us little about the moral questions we're considering here. Do you guys agree or am I missing something?

Comment author: thebestwecan 28 June 2017 09:30:13PM *  2 points

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is just frequently qualify that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given that it makes people think "Oh, I can just assign 0% to fruit flies!", but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

Comment author: Nekoinentr 29 June 2017 05:16:09AM 2 points

I don't think I understand what you mean by consciousness being objective. When you mention "what processes, materials, etc. we subjectively choose to use as the criteria for consciousness", this sounds to me as if you're talking about people having different definitions of consciousness, especially if the criteria are meant as definitive rather than indicative. However, presumably in many cases whether the criteria are present will be an objective question.

When you talk about whether "consciousness is an actual property of the world", do you mean whether it's part of ontological base reality?

Comment author: Nekoinentr 29 June 2017 03:20:19AM 0 points

I wouldn't have thought that hits-based giving should be a general strategy, as it's one highly specific way of having an impact. I can understand 80,000 Hours developing it as a way to understand their own impact; it fits when you're giving in-depth advice to a few individuals on their whole careers, but that's an atypical case.
