Comment author: Roxanne_Heston 02 October 2017 10:14:51AM 0 points

Hm, we haven't considered this in particular, although we are considering alternative funding models. If you think we should prioritize setting something like this up, can you make the case for this over our current scheme or more general certificates of impact?

Comment author: RomeoStevens 03 October 2017 11:49:22PM 0 points

I can't make a case for prioritization, as I haven't been able to find enough data points for a reasonable base rate for the effects of such an incentive. FQXi might have non-public data on how their program has gone that they might be willing to share with CEA. I'd probably also try reaching out to the John Templeton Foundation, though they are less likely to engage. It is likely worth a short brainstorm to identify people who might know more about how prizes typically work out.

Comment author: RomeoStevens 02 October 2017 02:46:20AM 2 points

Is CEA considering awarding prizes, after the fact, to papers that advance core areas?

Comment author: RomeoStevens 07 September 2017 01:24:41AM 0 points

There are lots of books about which direction you might want to self-modify in. Are there good books about the outside view on self-modification? What metrics have people tried? (And, importantly, which popular ones do we know don't work?) What effect sizes are reasonable given heritability, and how can you measure them?
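
A rough way to frame the heritability bound, assuming a simple additive variance decomposition and a made-up heritability figure:

\sigma_P^2 = \sigma_G^2 + \sigma_E^2, \qquad h^2 = \sigma_G^2 / \sigma_P^2, \qquad \sigma_E = \sqrt{1 - h^2}\,\sigma_P

With h^2 = 0.6, \sigma_E = \sqrt{0.4}\,\sigma_P \approx 0.63\,\sigma_P: even an intervention that moves someone's environment by a full environmental standard deviation shifts the trait by only about 0.63 population standard deviations, under this (admittedly simplistic) model.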

Even just a collection of which traits have evidence of high vs. low malleability would be great.

What other considerations are relevant? This seems like 80K's wheelhouse.

Comment author: MikeJohnson 31 July 2017 05:32:40PM 0 points

I think that's fair: beneficial equilibria could depend on reifying things like this.

On the other hand, I'd suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives, but they still often incur a cost. E.g., I don't think corporations can suffer, so in many cases it'll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative and I'm not sure how strongly I'd stand behind it...)

Comment author: RomeoStevens 03 August 2017 09:42:16PM 0 points

Yeah, an S-risk minimizer being trivially exploitable, etc.

Comment author: MikeJohnson 26 July 2017 06:33:54PM 2 points

> I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

Ah, the opposite, actually: my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either.

> What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Thanks, this is helpful. :)

The following is tangential, but I thought you'd enjoy this Yuval Noah Harari quote on abstraction and suffering:

> In terms of power, it’s obvious that this ability [to create abstractions] made Homo sapiens the most powerful animal in the world, and now gives us control of the entire planet. From an ethical perspective, whether it was good or bad, that’s a far more complicated question. The key issue is that because our power depends on collective fictions, we are not good in distinguishing between fiction and reality. Humans find it very difficult to know what is real and what is just a fictional story in their own minds, and this causes a lot of disasters, wars and problems.
>
> The best test to know whether an entity is real or fictional is the test of suffering. A nation cannot suffer, it cannot feel pain, it cannot feel fear, it has no consciousness. Even if it loses a war, the soldier suffers, the civilians suffer, but the nation cannot suffer. Similarly, a corporation cannot suffer, the pound sterling, when it loses its value, it doesn’t suffer. All these things, they’re fictions. If people bear in mind this distinction, it could improve the way we treat one another and the other animals. It’s not such a good idea to cause suffering to real entities in the service of fictional stories.

Comment author: RomeoStevens 30 July 2017 08:29:49AM 0 points

The quote seems very myopic. Let's say that we have a religion X with an excellent track record of preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.

Comment author: RomeoStevens 30 July 2017 08:22:46AM 0 points

If EAs have shitty lives, many fewer people will become EAs. EAs should give up to the limit of their ability to have the same or slightly better lives than their peers by being more efficient with their money in purchasing happiness, modulo other considerations such as reinvesting in human capital for giving later. This will also lead to greater productivity, which is often too heavily discounted in people's calculations because of the introspection illusion: thinking that their future self will be able to tank the hits from a crappier life for the sake of their values better than it actually will.
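
A toy version of the efficiency claim, assuming log-utility of consumption and made-up numbers: if happiness scales as h(c) = \ln(c), then donating a fraction d of income while purchasing happiness with efficiency multiplier m on the remainder matches the non-donating baseline whenever

\ln\big(m\,(1 - d)\big) \ge \ln(1) = 0 \iff m \ge 1 / (1 - d)

so at d = 0.2, a multiplier of m = 1.25 exactly cancels the hit: \ln(1.25 \times 0.8) = \ln(1) = 0. Spending the remaining 80% a quarter more efficiently buys the same life as giving nothing.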

Comment author: MikeJohnson 21 July 2017 07:37:03PM 3 points

> Curious for your take on the premise that ontologies always have tacit telos.

Some ontologies seem to have more of a telos 'baked in' (Christianity might be a good example), whereas other ontologies have zero explicit telos (e.g., pure mathematics).

But I think you're right that there's always a tacit telos, perhaps based on elegance. When I argue that "consciousness is a physics problem", I'm arguing that it inherits physics' tacit telos, which seems to be elegance-as-operationalized-by-symmetry.

I wonder if "elegance" always captures telos? If it does, that would indicate a certain theory of effective social/personal change...

> Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

Yeah, it doesn't seem that technology can ever truly be "teleologically neutral".

Comment author: RomeoStevens 22 July 2017 03:39:42PM 0 points

Elegance is probably worth exploring, in the same way that moral descriptivism as a field turned up some interesting things. My naive take is that it's something like 'efficient compression of signaling future abundance.'

Another frame for the problem: what is mathematical and scientific taste, and how does it work?

Also, a more efficient objection to religion: 'your compression scheme is lossy, bro.' :D

Comment author: RomeoStevens 21 July 2017 08:41:12AM 2 points

> that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible

Curious for your take on the premise that ontologies always have tacit telos.

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

I think this is relevant for the dissonance model of suffering, though I can't fully articulate how yet.

Comment author: RomeoStevens 14 April 2017 05:47:02PM 9 points

I just want to note that, based on what I have seen, using common cultural touchstones like you do here is WAY underappreciated in terms of effectiveness. It makes me think more people should try stand-up, like Bostrom did, to get a flavor for communicating complex situations effectively.

Comment author: HoldenKarnofsky 30 March 2017 11:43:19PM 0 points

The principles were meant as descriptions, not prescriptions.

I'm quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: "I think that one of the best ways to learn is to share one's impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things." But because the risks are what they are, I've concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Comment author: RomeoStevens 31 March 2017 04:56:57AM 0 points

> I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Most people wear their hearts on their sleeves to a greater degree than they might realize. Public conservatism of discourse seems a pretty reasonable proxy for private conservatism of discourse in most cases. As I mentioned, I am very happy to hear evidence that this is not the case for Open Phil.

I do not think the model of creativity as a deliberate, trainable set of practices is widely known, so I go out of my way to bring it up with respect to projects that are important.
