Comment author: MikeJohnson 20 November 2017 08:46:51PM *  4 points [-]

This is a wonderful overview. I especially appreciated the notes about possible biases in each study.

My expectation is that the "mental health tech" field is also worth keeping an eye on, although it's often characterized by big claims and not a lot of supporting data. I'm cautiously optimistic that an app like UpLift (Spencer Greenberg et al.) might be able to improve upon existing self-administered CBT options.

There have also been a lot of promising developments in neuroscience and 'applied philosophy of mind', and if there are ways of turning these into technology, it seems plausible we could start to see some "10x results". Better ways to understand what's going on in brains will lead to better tools to fix them when they break.

The two paradigms I find most intriguing here are

  • the predictive coding / free energy paradigm (primary work by Karl Friston, Anil K. Seth, and Andy Clark; for a nice summary see SSC's book review of Surfing Uncertainty and 'Toward a Predictive Theory of Depression' - also, Adam Safron is an EA who really knows his stuff here, and would be a good person to talk to about how predictive coding models could help inform mental health interventions)

  • the connectome-specific harmonic wave paradigm (primary work by Selen Atasoy; for a nice summary see this video & transcript - this has informed much of QRI's thinking about mental health; a toy sketch of the underlying math follows below)
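For concreteness, here's a minimal toy sketch (my own illustration, not Atasoy's actual pipeline or any QRI code) of the core mathematical move behind connectome harmonics: treat the structural connectome as a weighted graph, take the eigenvectors of its graph Laplacian as the "harmonic" basis, and decompose brain activity as a weighted sum of those modes. The connectivity matrix, region count, and activity snapshot below are random stand-ins, not real data.

```python
# Toy sketch: connectome harmonics as eigenmodes of the connectome's graph Laplacian.
# The "connectome" here is a random symmetric matrix, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 20                                    # hypothetical parcellation size
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                                 # symmetric connectivity weights
np.fill_diagonal(A, 0)                            # no self-connections

D = np.diag(A.sum(axis=1))                        # degree matrix
L = D - A                                         # graph Laplacian

eigvals, harmonics = np.linalg.eigh(L)            # columns = "connectome harmonics"

activity = rng.standard_normal(n_regions)         # a made-up activity snapshot
coefficients = harmonics.T @ activity             # projection onto each harmonic
print(coefficients[:5])                           # weight of the lowest-frequency modes
```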

I'd also love to survey other people's intuitions on what neuroscience work they think could lead to a '10x breakthrough' in mental health tech.

Comment author: RomeoStevens 20 November 2017 09:23:25PM *  4 points [-]

Two areas I think are most promising, off the top of my head (held lightly):

  1. Continuing connectome work with advanced meditators. This kind of research has been ongoing at various institutes for the last decade. It would be nice to get a consistent pipeline of funding so the work is less stop-start.

  2. Triaging of people into mental health interventions. By paying too much attention to mean effect size in aggregate treatment populations, we are potentially ignoring large effect sizes in restricted treatment populations. Gathering data on the shapes of outcome distributions, and doing some hypothesis exploration on which hidden features make certain people high responders to certain interventions, could yield incredibly high returns (see the toy simulation below).
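To make the second point concrete, here's a toy simulation (all numbers invented) of how a large effect confined to a small "high responder" subgroup washes out into an unimpressive population mean:

```python
# Toy simulation: a treatment with a large effect in a hidden 10% subgroup
# looks weak if you only report the mean effect over the whole sample.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
high_responder = rng.random(n) < 0.10               # hidden feature, ~10% of people
true_effect = np.where(high_responder, 2.0, 0.0)    # effect size in SD units

outcome = true_effect + rng.standard_normal(n)      # noisy observed improvement

print(f"whole sample mean:    {outcome.mean():.2f}")                   # ~0.2, "weak"
print(f"high responders mean: {outcome[high_responder].mean():.2f}")   # ~2.0, large
print(f"everyone else mean:   {outcome[~high_responder].mean():.2f}")  # ~0.0
```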

Comment author: RomeoStevens 05 November 2017 05:53:16AM -1 points [-]

Geographical distance is a kind of inferential distance.

Comment author: Roxanne_Heston 02 October 2017 10:14:51AM 0 points [-]

Hm, we haven't considered this in particular, although we are considering alternative funding models. If you think we should prioritize setting something like this up, can you make the case for this over our current scheme or more general certificates of impact?

Comment author: RomeoStevens 03 October 2017 11:49:22PM 0 points [-]

I can't make a case for prioritization, as I haven't been able to find enough data points to form a reasonable base rate for the effects of such an incentive. FQXi might have non-public data on how their program has gone that they'd be willing to share with CEA. I'd probably also try reaching out to the John Templeton Foundation, though they are less likely to engage. It's likely worth a short brainstorm of people who might know more about how prizes typically work out.

Comment author: RomeoStevens 02 October 2017 02:46:20AM 2 points [-]

Is CEA considering awarding prizes, after the fact, to papers that advance core areas?

Comment author: RomeoStevens 07 September 2017 01:24:41AM *  0 points [-]

There are lots of books about which direction you might want to self-modify in. Are there good books about the outside view on self-modification? What metrics have people tried? (And, importantly, which popular ones do we know don't work?) What effect sizes are reasonable given heritability, and how can you measure them?

Even just a collection of which things have evidence of high vs. low malleability would be great.

What other considerations are relevant? This seems like 80K's wheelhouse.

Comment author: MikeJohnson 31 July 2017 05:32:40PM 0 points [-]

I think that's fair-- beneficial equilibria could depend on reifying things like this.

On the other hand, I'd suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives, but they still often incur a cost. E.g., I don't think corporations can suffer, so in many cases it'll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative, and I'm not sure how strongly I'd stand behind it...)

Comment author: RomeoStevens 03 August 2017 09:42:16PM 0 points [-]

Yeah, an S-risk minimizer being trivially exploitable, etc.

Comment author: MikeJohnson 26 July 2017 06:33:54PM 2 points [-]

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

Ah, the opposite, actually: my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either.

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Thanks, this is helpful. :)

The following is tangential, but I thought you'd enjoy this Yuval Noah Harari quote on abstraction and suffering:

In terms of power, it’s obvious that this ability [to create abstractions] made Homo sapiens the most powerful animal in the world, and now gives us control of the entire planet. From an ethical perspective, whether it was good or bad, that’s a far more complicated question. The key issue is that because our power depends on collective fictions, we are not good in distinguishing between fiction and reality. Humans find it very difficult to know what is real and what is just a fictional story in their own minds, and this causes a lot of disasters, wars and problems.

The best test to know whether an entity is real or fictional is the test of suffering. A nation cannot suffer, it cannot feel pain, it cannot feel fear, it has no consciousness. Even if it loses a war, the soldier suffers, the civilians suffer, but the nation cannot suffer. Similarly, a corporation cannot suffer, the pound sterling, when it loses its value, it doesn’t suffer. All these things, they’re fictions. If people bear in mind this distinction, it could improve the way we treat one another and the other animals. It’s not such a good idea to cause suffering to real entities in the service of fictional stories.

Comment author: RomeoStevens 30 July 2017 08:29:49AM 0 points [-]

The quote seems very myopic. Let's say that we have a religion X that has an excellent track record at preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.

Comment author: RomeoStevens 30 July 2017 08:22:46AM *  0 points [-]

If EAs have shitty lives, many fewer people will become EAs. EAs should give up to the limit at which they can still maintain lives the same as, or slightly better than, their peers' by being more efficient at purchasing happiness with their money. Modulo other considerations, such as reinvesting in human capital for giving later, etc. This will also lead to greater productivity, which people often discount too heavily because of the introspection illusion: thinking that their future self will be better able to tank the hits from a crappier life for the sake of their values than it actually will be.

Comment author: MikeJohnson 21 July 2017 07:37:03PM *  3 points [-]

Curious for your take on the premise that ontologies always have tacit telos.

Some ontologies seem to have more of a telos 'baked in'-- Christianity might be a good example-- whereas other ontologies have zero explicit telos-- e.g., pure mathematics.

But I think you're right that there's always a tacit telos, perhaps based on elegance. When I argue that "consciousness is a physics problem", I'm arguing that it inherits physics' tacit telos, which seems to be elegance-as-operationalized-by-symmetry.

I wonder if "elegance" always captures telos? This would indicate a certain theory-of-effective-social/personal-change...

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

Yeah, it doesn't seem technology can ever truly be "teleologically neutral".

Comment author: RomeoStevens 22 July 2017 03:39:42PM *  0 points [-]

Elegance is probably worth exploring in the same way that moral descriptivism as a field turned up some interesting things. My naive take is something like 'efficient compression of signaling future abundance.'

Another frame for the problem: what is mathematical and scientific taste and how does it work?

Also, more efficient objection to religion: 'your compression scheme is lossy bro.' :D

Comment author: RomeoStevens 21 July 2017 08:41:12AM *  2 points [-]

that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible

Curious for your take on the premise that ontologies always have tacit telos.

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

I think this is relevant for the dissonance model of suffering, though I can't fully articulate how yet.
