Comment author: SoerenMind  (EA Profile) 25 July 2017 11:27:33AM *  0 points

Great talk!

Given the value that various blogs and other online discussions have provided to the EA community, I'm a bit surprised by the relative absence of 'advancing the state of community knowledge by writing etc.' in 80k's advice. In fact, I've found that the advice to build lots of career capital and fill every gap with an internship has discouraged me from such activities in the past.

Comment author: SoerenMind  (EA Profile) 25 July 2017 11:15:43AM 0 points

I see quite a bit of relevant cognitive science work these days, e.g. this: http://saxelab.mit.edu/resources/papers/Kleiman-Weiner.etal.2017.pdf

Comment author: MikeJohnson 21 July 2017 06:57:35PM 3 points

Hi Sören, your general point (am I critiquing FRI, or functionalism?) is reasonable. I do note in the piece why I focus on FRI:

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough to criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

I should say too that the purpose of bringing up QRI's work is not to suggest FRI should be focusing on this, but instead that effort developing alternatives helps calibrate the field:

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

Comment author: SoerenMind  (EA Profile) 22 July 2017 11:07:08AM 0 points

Makes sense :)

Comment author: kbog  (EA Profile) 21 July 2017 04:20:14PM *  6 points

I think the choice of a metaethical view is less important than you think. Anti-realism is frequently a much richer view than just talking about preferences. It says that our moral statements aren't truth-apt, but just because our statements aren't truth-apt doesn't mean they're merely about preferences. Anti-realists can give accounts of why a rigorous moral theory is justified and is the right one to follow, not much different from how realists can. Conversely, you could even be a moral realist who believes that moral status boils down to which computations you happen to care about.

Anyway, the point is that anti-realists can take pretty much any view in normative ethics, and justify those views in mostly the same ways that realists tend to justify their views (i.e. reasons other than personal preference). Just because we're not talking about whether a moral principle is true or not doesn't mean that we can no longer use the same basic reasons and arguments in favor of or against that principle. Those reasons will just have a different meaning.

Plus, physicalism is a weaker assertion than the view that consciousness is merely a matter of computation or information processing. Consciousness could be reducible to physical phenomena without being reducible to computational steps. (Edited to add: this is probably what most physicalists think.)

Comment author: SoerenMind  (EA Profile) 22 July 2017 10:59:39AM 2 points

Thanks for the clarification. I can't comment much, as I don't know much about the different flavors of anti-realism.

One thing I'd like to point out, and I'm happy to be corrected on this, is that when an anti-realist argues they will often (always?) base themselves on principles such as consistency. It seems hard to argue anything without referring to some principle. But someone who doesn't support the application of a principle won't be convinced, and that's down to preferences too. (I certainly know people who reject the drowning child argument because they explicitly don't care about consistency.) So you could see debate about ethics as people exploring the implications of principles they happen to share.

Agree on physicalism being a fairly general set of views.

Comment author: kbog  (EA Profile) 21 July 2017 03:35:07PM *  1 point

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of the implications?

If the position is wrong then their work is of little use, or possibly harmful. FRI is a nonprofit organization affiliated with EA which uses nontrivial amounts of human and financial capital; of course it's a problem if the work isn't high value.

I wouldn't be so quick to assume that the idea that moral status boils down to asking 'which computations do I care about' is a well-respected position in philosophy. It probably exists but not in substantial measure.

Comment author: SoerenMind  (EA Profile) 21 July 2017 04:03:08PM 6 points

As far as I can see, that's just functionalism / physicalism plus moral anti-realism, both of which are well-respected. But as philosophy of mind and moral philosophy are separate fields, you won't see much discussion of the intersection of these views. Completely agreed, if you do assume the position is wrong.

Comment author: Gregory_Lewis 21 July 2017 12:28:45PM *  0 points

Mea culpa. I was naively thinking of superimposing the 'previous' axes. I hope the underlying worry still stands, given the arbitrarily many sets of mathematical objects which could be reversibly mapped onto phenomenological states, but perhaps this betrays a deeper misunderstanding.

Comment author: SoerenMind  (EA Profile) 21 July 2017 02:23:16PM *  2 points

I'll assume you meant isomorphically mapped rather than reversibly mapped; otherwise there are indeed a lot of random things you can map anything onto.

I tend to think of isomorphic objects as equivalent in every way that can be mathematically described (and that includes every way I could think of). However, objects can be made of different elements, so the equivalence only holds after stripping away all information about the elements and seeing them as abstract entities that relate to each other in some way. So you could get {Paris, Rome, London} == {1, 2, 3}. What Mike is getting at, though, I think, is that the elements also have to be isomorphic all the way down; then I can't think of a reason not to see such completely isomorphic objects as the same.
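
To make that concrete, here is a toy sketch (my own illustration, nothing from the thread's sources): for bare sets an isomorphism is just a bijection, and any relation we define on one set can be transported across it and pulled back unchanged.

```python
# Toy illustration: a bijection between {Paris, Rome, London} and
# {1, 2, 3} transports any relation defined on one set to the other,
# preserving its abstract shape while discarding what the elements "are".

iso = {"Paris": 1, "Rome": 2, "London": 3}   # one of the 3! = 6 bijections
inv = {v: k for k, v in iso.items()}         # its inverse

# An arbitrary example relation on the cities:
relation = {("Paris", "Rome"), ("Rome", "London")}

# Transport the relation across the bijection:
relation_on_numbers = {(iso[a], iso[b]) for (a, b) in relation}
print(relation_on_numbers)                   # {(1, 2), (2, 3)}

# Pulling back recovers the original relation exactly:
pulled_back = {(inv[a], inv[b]) for (a, b) in relation_on_numbers}
print(pulled_back == relation)               # True
```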

Comment author: Brian_Tomasik 21 July 2017 01:00:53PM 4 points

Including relevant work on modelling valence

Cool. :) I found that article enlightening and discussed it on pp. 20-21 of my RL paper.

Comment author: SoerenMind  (EA Profile) 21 July 2017 02:15:01PM 2 points

One of the authors (Peter Dayan) is my supervisor; let me know if you'd like me to ask him anything. He does a lot of RL-style modelling :)

Comment author: SoerenMind  (EA Profile) 21 July 2017 12:11:10PM *  11 points

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of the implications? Exploring a different position should be a task for people who actually place more than a tiny bit of credence in it, it seems to me, especially when it comes to a new and speculative hypothesis like Principia Qualia.

To me, this post mostly reads like a contribution to a long-standing philosophical debate, and it would be more appropriately presented as arguing against a philosophical assumption rather than against a research group working under that assumption.

In the cog-sci / neuroscience institute where I currently work, productive work is being done under assumptions similar to, though less explicit than, Brian's / FRI's. Including relevant work on modelling valence in animals in the reinforcement learning framework.
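
For readers unfamiliar with that framework, here is a minimal toy sketch of the general flavour (my own construction, not the actual models used at the institute; all parameters are arbitrary): in temporal-difference learning, the reward prediction error is sometimes read as a crude computational proxy for momentary valence.

```python
# Minimal TD-learning toy (hypothetical illustration): the reward
# prediction error `delta` is the signal sometimes read as a crude
# proxy for valence in this style of modelling.

n_states = 5
values = [0.0] * n_states        # value estimate for each state in a chain
alpha, gamma = 0.1, 0.9          # learning rate and discount (arbitrary)

def reward(state):
    return 1.0 if state == n_states - 1 else 0.0   # only the end is rewarding

for episode in range(500):
    for state in range(n_states - 1):
        next_state = state + 1
        # Reward prediction error: positive delta ~ "better than expected".
        delta = reward(next_state) + gamma * values[next_state] - values[state]
        values[state] += alpha * delta

print([round(v, 2) for v in values])   # values ramp up toward the reward
```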

I know you disagree with these assumptions, but a post like this can make it seem to outsiders as if you're criticizing a somewhat crazy position, and by extension cast FRI in a bad light.

Comment author: Gregory_Lewis 21 July 2017 08:50:04AM *  0 points

Aside:

Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry.

I don't see how this can work, given that (I think) isomorphism is transitive and there are lots of isomorphisms between sets of mathematical objects which will not preserve symmetry.

Toy example. Say we can map the set of all phenomenological states (P) onto 2D shapes (S), and we hypothesize their valence corresponds to their symmetry about the y=0 axis. Now suppose an arbitrary shear transformation is applied to every member of S, giving S!. P (we grant) is isomorphic to S. Yet S! is isomorphic to S, and therefore also isomorphic to P; and the members of S and S! which are symmetrical differ. So which set of shapes should we use?
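
To spell the toy example out in code (a hedged sketch of my own; the shape and shear factor are arbitrary choices): a shear is a bijection on point sets, yet it breaks mirror symmetry about the fixed y=0 axis.

```python
# Toy version of the worry: a bijective shear maps a y=0-symmetric
# shape to one that is no longer symmetric about that fixed axis.

def reflect_y0(points):
    """Reflect a shape (a set of 2D points) across the y=0 axis."""
    return {(x, -y) for (x, y) in points}

def is_symmetric_about_y0(points):
    """A shape is y=0 symmetric iff reflection maps it to itself."""
    return reflect_y0(points) == points

def shear(points, k=1.0):
    """Invertible shear transformation: (x, y) -> (x + k*y, y)."""
    return {(x + k * y, y) for (x, y) in points}

shape = {(0.0, 1.0), (0.0, -1.0), (2.0, 0.0)}   # symmetric about y=0
print(is_symmetric_about_y0(shape))             # True
print(is_symmetric_about_y0(shear(shape)))      # False
```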

Comment author: SoerenMind  (EA Profile) 21 July 2017 11:55:26AM 2 points

Trivial objection, but the y=0 axis also gets transformed, so the symmetries are preserved. In maths, symmetries aren't usually thought of as depending on some specific axis; e.g. the symmetry group of a cube is the same as the symmetry group of a rotated version of the cube.
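
To illustrate (my own sketch, reusing the arbitrary shape and shear from above): if R is a symmetry of a shape X, then the conjugate map T R T^-1 is a symmetry of the sheared shape T(X), so the symmetry group is carried along rather than destroyed.

```python
# Sketch (hypothetical illustration): conjugating a symmetry R of shape X
# by the shear T yields a symmetry of the sheared shape T(X), so the
# symmetry group survives; only the axis it acts about gets transformed.
import numpy as np

T = np.array([[1.0, 1.0],          # shear: (x, y) -> (x + y, y)
              [0.0, 1.0]])
R = np.array([[1.0,  0.0],         # reflection across the y=0 axis
              [0.0, -1.0]])
R_conj = T @ R @ np.linalg.inv(T)  # the transported symmetry

X = np.array([[0.0, 0.0, 2.0],     # columns are the points of the shape
              [1.0, -1.0, 0.0]])
X_sheared = T @ X

def same_point_set(A, B):
    """True if two 2xN arrays hold the same points, ignoring order."""
    as_set = lambda M: {tuple(np.round(col, 6)) for col in M.T}
    return as_set(A) == as_set(B)

print(same_point_set(R @ X, X))                       # True: R fixes X
print(same_point_set(R_conj @ X_sheared, X_sheared))  # True: conjugate fixes T(X)
```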

Comment author: SoerenMind  (EA Profile) 08 June 2017 09:00:41PM *  3 points

I got linked here while browsing a pretty random blog on deep learning; you're getting attention! (https://medium.com/intuitionmachine/seven-deadly-sins-and-ai-safety-5601ae6932c3)
