Comment author: lukeprog 28 June 2017 11:35:00PM 2 points [-]

My hope was that the Type A-ness / subjectivity of the concept of "consciousness" I'm using would be clear from section 2.3.1 and 2.3.2, and then I can write paragraphs like the one above about fruit fly consciousness, which refers back to the subjective notion of consciousness introduced in section 2.3.

But really, I just find it very cumbersome to write in detail and at length about consciousness in a way that allows every sentence containing consciousness words to clearly be subjective / Type A-style consciousness. It's similar to what I say in the report about fuzziness:

given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)

But then, throughout the report, I make liberal use of "normal" phrases about consciousness such as what's conscious vs. not-conscious, "becoming" conscious or not conscious, what's "in" consciousness or not, etc. It's just really cumbersome to write in any other way.

Another point is that, well, I'm not just a subjectivist / Type A theorist about consciousness, but about nearly everything. So why shouldn't we feel fine using more "normal" sentence structures to talk about consciousness, if we feel fine talking about "living things" and "mountains" and "sorting algorithms" and so on that way? I don't have any trouble talking about the likelihood that there's a mountain in such-and-such city, even though I think "mountain" is a layer of interpretation we cast upon the world.

Comment author: thebestwecan 29 June 2017 03:26:19PM *  2 points [-]

That pragmatic approach makes sense and helps me understand your view better. Thanks! I do feel like the consequences of suggesting objectivism for consciousness are more significant than for "living things," "mountains," and even terms that are themselves very important like "factory farming."

Consequences being things like (i) whether we get wrapped up in the ineffability/hard problem/etc. such that we get distracted from the key question (for subjectivists) of "What are the mental things we care about, and which beings have those?" and (ii) in the particular case of small minds (e.g. insects, simple reinforcement learners), whether we try to figure out their mental lives based on objectivist speculation (which, for subjectivists, is misguided) or force ourselves to decide what the mental things we care about are, and then thoughtfully evaluate small minds on that basis. I think evaluating small minds is where the objective/subjective difference really starts to matter.

Also, to a lesser extent, (iii) how much we listen to "expert" opinion outside of just people who are very familiar with the mental lives of the beings in question, and (iv) unknown unknowns and keeping a norm of intellectual honesty, which seems to apply more to discussions of consciousness than of mountains/etc.

Comment author: lukeprog 28 June 2017 06:36:23PM 4 points [-]

I'm not sure what you mean by "objective definition" or "objectively correct answer," but I don't think I think of consciousness as being "objective" in your sense of the term.

The final question, for me, is "What should I care about?" I elaborate my "idealized" process for answering this question in section 6.1.2. Right now, my leading guess for what I'd conclude upon going through some approximation of that idealized process is that I'd care about beings with valenced conscious experience, albeit with different moral weights depending on a variety of other factors (early speculations in Appendix Z7).

But of course, I don't know quite what sense of "valenced conscious experience" I'd end up caring about upon undergoing my idealized process for making moral judgments, and the best I can do at this point is something like the definition by example (at least for the "consciousness" part) that I begin to elaborate in section 2.3.1.

Re: Type A physicalism, aka Type A materialism. As mentioned in section 2.3.2, I do think my current view is best thought of as "'type A materialism,' or perhaps toward the varieties of 'type Q' or 'type C' materialism that threaten to collapse into 'type A' materialism anyway…" (see the footnote after this phrase for explanations). One longer article that might help clarify how I think about "type A materialism" w.r.t. consciousness or other things is Mixed Reference: The Great Reductionist Project and its dependencies.

That said, I do think the "triviality" objection is a serious one (Ctrl+F the report for "triviality objection to functionalism"), and I haven't studied the issue enough to have a preferred answer for it, nor am I confident there will ever be a satisfying answer to it — at least, for the purposes of figuring out what I should care about. Brian wrote a helpful explainer on some of these issues: How to Interpret a Physical System as a Mind. I endorse many of the points he argues for there, though he and I end up with somewhat different intuitions about what we morally care about, as discussed in the notes from our conversation.

Comment author: thebestwecan 28 June 2017 09:30:13PM *  2 points [-]

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is just frequently qualifies that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think "Oh, I can just assign 0% to fruit flies!" but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

Comment author: thebestwecan 28 June 2017 05:24:29PM *  4 points [-]

Thanks for doing this AMA. I'm curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question "Is an insect conscious?" or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?

The Open Phil conversation notes with Brian Tomasik say:

Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism

(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik's well-known analogy is that there's no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there's something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn't give you an objective definition in the same way knowing one piece of wood on four legs is a table, or even having several examples, doesn't give you an objective definition of a table.)

However, in the report, you write as though there is an objective definition (e.g. in the "Consciousness, innocently defined" section), and I feel most readers of the report will get that impression, e.g. that there's an objective answer as to whether insects are conscious.

Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it's still useful to use common sense rhetoric that treats it as objective, and you don't think it's that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there's still enough likelihood of Type B that you focus on questions like "If Type B is true, then is an insect conscious?" and would just shorthand this as "Is an insect conscious?" because e.g. if Type A is true, then consciousness research is not that useful in your view.

Comment author: Andy_Schultz 26 June 2017 02:37:58PM 0 points [-]

In the linked Summary of Evidence document, in the section "Farmed animal vs. wild animals vs. general antispeciesism focus", some of the rankings in the grid do not match the explanations below. For example, under Scale, the grid has Farmed animal focus as rank 1, but the explanation below has General antispeciesism as rank 1.

Comment author: thebestwecan 26 June 2017 08:49:45PM *  0 points [-]

Thanks, Andy. That table had the values of the previous table for some reason. We updated the page.

Comment author: thebestwecan 14 June 2017 12:33:58PM 5 points [-]

I'd like to ask the people who downvoted this post to share their concerns in comments if possible. I know animal content tends to get downvoted by some people on the EA Forum, so this might just be another instance of that, rather than for more specific reasons.

Comment author: thebestwecan 14 June 2017 02:00:54AM *  1 point [-]

I (Jacy) was asked a good question privately that I wanted to log my answer to here, about how our RCT approach compares with that of academic social science RCTs, which I also discussed some in my response to Jay.

While there are many features of academic social science research we hope to emulate, e.g. peer review, I think academia also has a lot of issues that we want to avoid. For example, some good science practices, e.g. preregistration, are still uncommon in academia, and there are strong incentives other than scientific accuracy, e.g. publish or perish, that we hope to minimize. I'd venture a speculative guess that the RCTs run by nonprofit researchers in the EA community, e.g. the Mercy For Animals online ads RCT, are higher-quality than most academic RCTs. The most recurrent issue in EA RCTs is low sample size, which seems like more of a funding issue than a skillset/approach issue. (It could be a skillset/approach issue in some ways, e.g. if EA nonprofits should be running fewer RCTs so they can get a higher sample size on the same budget, which I tentatively agree with and think is the current trend.)

With our Research Network, we're definitely happy to support high-quality academic research. We'd also be happy to hire academics interested in switching to nonprofit research, though we worry that few would be willing to work for the relatively low salaries.

In terms of communicating our research, our lack of PhDs and academic appointments on staff has been at the top of our list of concerns. Unfortunately there's just not a good fix available. Ideally, once we're able to make our first hire, we'd find a PhD who's willing to work for a nonprofit EA salary, but that seems unlikely. We do already have PhDs/academics in our advisory/review network. I've also considered personally going back to school for a PhD, but everyone I've consulted with thinks this wouldn't be worth the time cost.

Comment author: JesseClifton 09 June 2017 05:53:20PM 0 points [-]

Have animal advocacy organizations expressed interest in using SI's findings to inform strategic decisions? To what extent will your choices of research questions be guided by the questions animal advocacy orgs say they're interested in?

Comment author: thebestwecan 13 June 2017 12:34:51PM *  1 point [-]

We've been in touch with most EAA orgs (ACE, OPP, ACE top/standout charities) and they have expressed interest. We haven't done many hard pitches so far like, "The research suggests X. We think you should change your tactics to reflect that, by shifting from Y to Z, unless you have evidence we're not aware of." We hope to do that in the future, but we are being cautious and waiting until we have a little more credibility and track record. We have communicated our findings in softer ways to people who seem to appreciate the uncertainty, e.g. "Well, our impression of this social movement is that it's evidence for Z tactics, but we haven't written a public report on that yet and it might change by the time we finish the case study."

I (Jacy) would guess that our research-communication impact will be concentrated in that small group of animal advocacy orgs who are relatively eager to change their minds based on research, and perhaps in an even smaller group (e.g. just OPP and ACE). Their interests do influence us to a large extent, not just because it's where they're more open to changing their minds, but because we see them as intellectual peers. There are some qualifications we account for, such as SI having a longer-term focus (in my personal opinion, not sure they'd agree) than OPP's farmed animal program or ACE. I'd say that the interests of less-impact-focused orgs are only a small factor, since the likelihood of change and potential magnitude of change seem quite small.

Comment author: jayquigley 08 June 2017 11:29:45PM 0 points [-]

I worry that SI will delineate lots of research questions usefully, but that it will be harder to make needed progress on those questions. Are you worried about this as well, and if so, are there steps to be taken here? One idea is promoting the research projects to graduate students in the social sciences, such as via grants or scholarships.

Comment author: thebestwecan 09 June 2017 01:48:47PM *  1 point [-]

The Foundational Summaries page is our only completed or planned project that was primarily intended to delineate research questions. Because of its fairly exhaustive nature, I (Jacy) think it does only have to be done once, and now our future research can just go into that page instead of needing to be repeatedly delineated, if that makes sense.

None of the projects in our research agenda are armchair projects, i.e. they all include empirical, real-world study and aggregation of data. You can also find me personally critiquing other EA research projects for being too much about delineation and armchair speculation, instead of doing empirical research. We have also noted that our niche as Sentience Institute within EAA is foundational research that expands the EAA evidence base. That is definitely our primary goal as an organization.

For all those reasons, I'm not very worried about us spending too much time on delineation. There's also the question of whether these research questions are so difficult to make concrete progress on that our work won't be cost-effective, even though such progress, if achieved, would be very impactful. That's my second biggest worry about SI's impact (biggest is that big decision-makers won't properly account for the research results). I don't think there's much to do to fix that concern besides working hard for the next few months or couple years and seeing what sort of results we can get. We've also had some foundational research from ACE, Open Phil, and other parties that seems to have been taken somewhat seriously by big EAA decision-makers, so that's promising.

We'd be open to giving grants or scholarships to relevant research projects done by graduate students in the social sciences. I don't think the demand for such funding and the amount of funding we could supply is such that it'd be cost-effective to set up a formal grants program at this time (we only have two staff and would like to get a lot of research done by December), but we'd be open to it. Two concerns that come to mind here are (i) academic research has a lot of limitations, especially when done by untenured junior researchers who have to worry a lot about publishing in top journals, matching their subject matter with the interests of professors, etc. (ii) part-time/partially-funded research is challenging, to the point that some EA organizations don't even think it's worth the time to have volunteers. There's a lot of administrative cost that could make it not cost-effective overall, and better to just hire full-time researchers.

Those concerns are mitigated by considerations like: (i) grad students, even at that early stage, could have valuable subject matter expertise. For example, I'm always on the prowl for someone who both knows a lot about the academic social movement literature and also approaches it with an EA perspective. I've found few people who have both features to a significant degree. (ii) some might be willing to do relevant research with only minimal amounts of funding and supervision, and that could be very low-hanging fruit. We have our Research Network for this sort of work, and we do hope to continue trying to capture low-hanging fruit with it.

Comment author: Ekaterina_Ilin 05 June 2017 07:32:45PM 2 points [-]

SP used to work on a research agenda with questions concerning sentience as a phenomenon in itself; the list still resides here: SP former Research Agenda

SI's research is now much more advocacy-centered, as you write:

Our mission is to build on the body of evidence for how to most effectively expand humanity’s moral circle, and to encourage advocates to make use of that evidence.

What is the reason for this strategic shift?

Comment author: thebestwecan 06 June 2017 02:09:34PM *  2 points [-]

We (Kelly and Jacy) weren't working at SP when its agenda was written, but my impression is that SP's research agenda was written to broadly encompass most questions relevant to reducing suffering, though this excluded some of the questions already prioritized by the Foundational Research Institute, another EAF project. I (Jacy) think the old SP agenda reflects EAF's tendency to keep doors open, as well as an emphasis on more foundational questions like the "sentience as a phenomenon itself" ones you mention here.

When we were founding SI, we knew we wanted to have a relatively narrow focus. This was the recommendation of multiple advocacy/EA leaders we asked for advice. We also wanted to have a research agenda that was relatively tractable (though of course we don't expect to have definitive answers to any of the big questions in EA in the near future), so we could have a shorter feedback loop on our research progress. As we improve our process, we'll lean more towards questions with longer feedback loops. We also think that the old SP agenda was not only broad in topic, but broad in the skills/expertise necessary for tackling its various projects. Narrowing the focus to advocacy/social change means there's more transferability of expertise between questions.

Finally, it seemed there had been a lot of talk in EA of values spreading as a distinct EA project, especially moral circle expansion, which arguably lies at the intersection of effective animal advocacy and existential risk/far future work, meaning it's been kind of homeless and could benefit from having its own organization.*

All of this led us to focus SI on moral circle expansion and the more narrow/tractable/concrete/empirical research agenda than that of the old SP.** We've considered keeping the old agenda around as a long-term/broad agenda while still focusing on the new one for day-to-day work. I think it's currently still up-in-the-air what exactly will happen to that document.

*The analogy that comes to mind here is cultured/clean meat, i.e. real meat grown from animal cells without slaughter. People in this field argue it's been heavily neglected relative to other scientific projects because it's at the intersection of food science (because the product is food) and medical science (because that's where tissue engineering has been most popular).

**We think even our current mission/agenda is very broad, which is why we have the even narrower focus on animal farming right now. We think that narrower focus could change in the next few years, but we expect SI as an organization to be focused on moral circle expansion for the long haul.

Comment author: Peter_Hurford  (EA Profile) 02 June 2017 07:06:09PM 5 points [-]

We’re committed to evaluating our own efforts and changing directions or even disbanding the organization if we determine that we can make a greater impact elsewhere. If our research fails to generate new, actionable insights that make a significant difference to advocates’ decisions, we plan to shift our priorities towards outreach and movement-building, such as the career content described above.

Your philosophy is very admirable. How do you plan to track this, more concretely?

Comment author: thebestwecan 02 June 2017 11:32:46PM 8 points [-]

We're planning to make predictions about movement progress (e.g. rate of corporate welfare reforms) as well as our own goals (e.g. amount of evidence our research generates (ideally will be evaluated by an external party), number of influential advocates who change their minds based on our research). This is similar to what ACE and OPP do, and I think other EA orgs. With the self-predictions, we've been thinking we'll have some lower bounds where if we're consistently underperforming, we'll note that on our Mistakes page and consider big-picture reprioritization such as doing more outreach work.

We're currently in touch with several influential animal advocates because of our survey for the Foundational Summaries page on how EAA researchers currently view the evidence on these questions. We've had positive feedback on it so far, above our expectations, but it'll give us a good first feedback loop to see if our work is useful.

I'd also note that we wanted to get a 'minimum viable product' out there ASAP, so lots of our specific plans are still up in the air. We're still very interested in feedback, and now that we've got the website published, we'll be able to spend more time on the nitty gritty. Full disclosure, we're still going to need to spend a lot of time fundraising in the coming weeks, and then of course we need to do the actual research, so I'm not sure how much we should prioritize progress-tracking relative to that work.
