Comment author: MikeJohnson 31 July 2017 05:32:40PM 0 points [-]

I think that's fair-- beneficial equilibria could depend on reifying things like this.

On the other hand, I'd suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives but they still often incur a cost. E.g., I don't think corporations can suffer, so in many cases it'll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative and I'm not sure how strongly I'd stand behind it...)

Comment author: RomeoStevens 03 August 2017 09:42:16PM 0 points [-]

Yeah, an S-risk minimizer being trivially exploitable, etc.

Comment author: MikeJohnson 26 July 2017 06:33:54PM 2 points [-]

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

Ah, the opposite, actually--my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either.

What something better would look like--if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem--everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (Roughly, my desiderata are similar to Luke Muehlhauser's.)

Thanks, this is helpful. :)

The following is tangential, but I thought you'd enjoy this Yuval Noah Harari quote on abstraction and suffering:

In terms of power, it’s obvious that this ability [to create abstractions] made Homo sapiens the most powerful animal in the world, and now gives us control of the entire planet. From an ethical perspective, whether it was good or bad, that’s a far more complicated question. The key issue is that because our power depends on collective fictions, we are not good in distinguishing between fiction and reality. Humans find it very difficult to know what is real and what is just a fictional story in their own minds, and this causes a lot of disasters, wars and problems.

The best test to know whether an entity is real or fictional is the test of suffering. A nation cannot suffer, it cannot feel pain, it cannot feel fear, it has no consciousness. Even if it loses a war, the soldier suffers, the civilians suffer, but the nation cannot suffer. Similarly, a corporation cannot suffer, the pound sterling, when it loses its value, it doesn’t suffer. All these things, they’re fictions. If people bear in mind this distinction, it could improve the way we treat one another and the other animals. It’s not such a good idea to cause suffering to real entities in the service of fictional stories.

Comment author: RomeoStevens 30 July 2017 08:29:49AM 0 points [-]

The quote seems very myopic. Let's say that we have a religion X that has an excellent track record at preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.

Comment author: RomeoStevens 30 July 2017 08:22:46AM *  0 points [-]

If EAs have shitty lives, many fewer people will become EAs. EAs should give up to the point where their lives are still the same as, or slightly better than, their peers'--which they can manage by being more efficient at purchasing happiness with their money. (Modulo other considerations, such as reinvesting in human capital for giving later, etc.) This also leads to greater productivity, which people often discount too heavily in their calculations because of the introspection illusion: thinking that their future self will be able to tank the hits from a crappier life for the sake of their values better than it actually will.

Comment author: MikeJohnson 21 July 2017 07:37:03PM *  3 points [-]

Curious for your take on the premise that ontologies always have tacit telos.

Some ontologies seem to have more of a telos 'baked in'-- e.g., Christianity might be a good example-- whereas other ontologies have zero explicit telos-- e.g., pure mathematics.

But I think you're right that there's always a tacit telos, perhaps based on elegance. When I argue that "consciousness is a physics problem", I'm arguing that it inherits physics' tacit telos, which seems to be elegance-as-operationalized-by-symmetry.

I wonder if "elegance" always captures telos? This would indicate a certain theory-of-effective-social/personal-change...

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

Yeah, it doesn't seem like technology can ever truly be "teleologically neutral".

Comment author: RomeoStevens 22 July 2017 03:39:42PM *  0 points [-]

Elegance is probably worth exploring, in the same way that exploring moral descriptivism as a field turned up some interesting things. My naive take is something like 'efficient compression of signaling future abundance.'

Another frame for the problem: what is mathematical and scientific taste and how does it work?

Also, a more efficient objection to religion: 'your compression scheme is lossy, bro.' :D

Comment author: RomeoStevens 21 July 2017 08:41:12AM *  2 points [-]

that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible

Curious for your take on the premise that ontologies always have tacit telos.

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

I think this is relevant for the dissonance model of suffering, though I can't fully articulate how yet.

Comment author: RomeoStevens 14 April 2017 05:47:02PM 9 points [-]

I just want to note that I think using common cultural touchstones, as you do here, is WAY underappreciated in terms of effectiveness, based on what I have seen. It makes me think more people should try stand-up, like Bostrom did, to get a flavor for communicating complex situations effectively.

Comment author: HoldenKarnofsky 30 March 2017 11:43:19PM 0 points [-]

The principles were meant as descriptions, not prescriptions.

I'm quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: "I think that one of the best ways to learn is to share one's impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things." But because the risks are what they are, I've concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Comment author: RomeoStevens 31 March 2017 04:56:57AM 0 points [-]

I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Most people wear their hearts on their sleeves to a greater degree than they might realize. Public conservatism of discourse seems a pretty reasonable proxy measure for private conservatism of discourse in most cases. As I mentioned, I am very happy to hear evidence that this is not the case for openPhil.

I do not think the model of creativity as a deliberate, trainable set of practices is widely known, so I go out of my way to bring it up WRT projects that are important.

Comment author: HoldenKarnofsky 01 March 2017 06:32:45PM 5 points [-]

Thanks for the thoughts!

I'm not sure I fully understand what you're advocating. You talk about "only selectively engag[ing] with criticism" but I'm not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.

I agree that "we should be skeptical of our stories about why we do things, even after we try to correct for this." I'm not sure that the reasons I've given are the true ones, but they are my best guess. I note that the reasons I give here aren't necessarily very different from the reasons others making similar transitions would give privately.

I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

Comment author: RomeoStevens 28 March 2017 07:22:54PM *  0 points [-]

Whoops, I somehow didn't see this until now. Scattered EA discourse, shrug.

I am in support of only engaging selectively.

I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions,

great!

I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

agreed

I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

the whole thing. Principles are better as descriptions and not prescriptions :)

WRT preventing views from calcifying, I think it is very very important to actively cultivate something similar to

"But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again." -Herbert Simon, Nobel Laureate, founding father of the AI field

I've been researching top and breakout performance and this sort of thing keeps coming up again and again. Fortunately, creative reasoning is not magic. It has been studied and has some parameters that can be intentionally inculcated.

This talk gives a brief overview: https://vimeo.com/89936101

And I recommend skimming one of Edward de Bono's books, such as Six Thinking Hats. He outlined much of the sort of reasoning found in Zero to One, The Lean Startup, and others way back in the early nineties. It may be that openPhil is already having such conversations internally. In which case, great! That would make me much more bullish on the idea that openPhil has a chance at outsize impact. My main proxy metric is an Umeshism: if you never output any batshit crazy ideas, your process is way too conservative.

Comment author: Zeke_Sherman 28 March 2017 05:26:15PM *  1 point [-]

Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.

Only if you assume that there are high thresholds for achievements.

The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum-complexity memetic payload.

I do not understand what you are saying.

Edit: do you mean the option to get rid of technological developments and start from scratch? I don't think there's any likelihood of that; it runs directly counter to all the pressures described in my post.

Comment author: RomeoStevens 28 March 2017 07:07:18PM 2 points [-]

Right to exit means the right to suicide, the right to exit geographically, the right to not participate in a process politically, etc.

In response to Utopia In The Fog
Comment author: RomeoStevens 28 March 2017 08:31:18AM 3 points [-]

Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed. The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum-complexity memetic payload. Interested in arguments why this might be wrong.
