Comment author: Wei_Dai 20 July 2017 07:50:49PM 15 points [-]

What lazy solutions will look like seems unpredictable to me. Suppose someone in the future wants to realistically roleplay a historical or fantasy character. The lazy solution might be to simulate a game world with conscious NPCs. The universe contains so much potential computing power (which presumably can be turned into conscious experiences) that even if a very small fraction of people do this (or other things whose lazy solutions happen to involve suffering), the result could be an astronomical amount of suffering.

Comment author: JesseClifton 20 July 2017 10:02:22PM 8 points [-]

Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.

Comment author: WillPearson 09 July 2017 10:07:31AM 0 points [-]

My own 2 cents. It depends a bit on what form of general intelligence is made first. There are at least two possible models.

  1. Super intelligent agent with a specified goal
  2. External brain lobe

With the first, you need to be able to specify human preferences in the form of a goal, which enables it to pick the right actions.

The external brain lobe would start out not very powerful and would not come with any explicit goals, but would be hooked into the human motivational system and develop goals shaped by human preferences.

HRAD is explicitly about the first. I would like both to be explored.

Comment author: JesseClifton 09 July 2017 05:17:07PM *  0 points [-]

Right, I'm asking how useful or dangerous your (1) could be if it didn't have very good models of human psychology - and therefore didn't understand things like "humans don't want to be killed".

Comment author: JesseClifton 07 July 2017 10:13:46PM 1 point [-]

Great piece, thank you.

Regarding "learning to reason from humans", to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?

Of course, the motivation to act on human preferences is another matter - but I wonder if at least the capability comes by default?

Comment author: JesseClifton 09 June 2017 05:53:20PM 0 points [-]

Have animal advocacy organizations expressed interest in using SI's findings to inform strategic decisions? To what extent will your choices of research questions be guided by the questions animal advocacy orgs say they're interested in?

Comment author: JesseClifton 05 June 2017 04:32:21PM 5 points [-]

Strong agreement. Considerations from cognitive science might also help us to get a handle on how difficult the problem of general intelligence is, and the limits of certain techniques (e.g. reinforcement learning). This could help clarify our thinking on AI timelines as well as the constraints which any AGI must satisfy. Misc. topics that jump to mind are the mental modularity debate, the frame problem, and insight problem solving.

This is a good article on AI from a cog sci perspective: https://arxiv.org/pdf/1604.00289.pdf

Comment author: Chriswaterguy 10 April 2017 09:30:54AM *  0 points [-]

Thanks for the analysis.

A point of disagreement: "if cultured meat is at least as expensive as farmed animal meat, it is doubtful that a substantial fraction of consumers would substitute, given the current unwillingness to replace animal products with similar vegan substitutes."

Not at all comparable. I have yet to find plant-based meat substitutes that I want to eat. They either taste disappointing compared to meat, leave me with pains in the gut, fail to meet my nutritional expectations, or a combination of the three. I actually prefer tofu or beans.

If cultured meat were available, even at twice the price of regular meat and in restricted varieties (e.g. only beef mince), I would buy it.

As Denkenberger says in their comment:

if meat substitutes actually tasted better or were more healthful, this could be a scenario with strong spontaneous adoption.

Comment author: JesseClifton 12 April 2017 11:57:37AM 0 points [-]

Yes, I think you're right, at least when prices are comparable.

Comment author: Linch 09 February 2017 07:21:00AM *  2 points [-]

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something more strongly than you're actually referring to, and this is a point that I really wanted to clarify anyway because I expect it to be a fairly common mistake)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This holds (mostly) regardless of the size of your prior: if a null result wouldn't cause a large downward update when your prior is high, it shouldn't cause a large downward shift when your prior is low either.

[Math with made-up numbers below]

As mentioned earlier, the probability of observing this null result under each hypothesized effect size is roughly:

If the true effect is 10%: ~23%

If the true effect is 1%: ~87%

If the true effect is 5%: ~49%

If the true effect is 20%: ~4.4%
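A minimal sketch of where these likelihoods could come from, assuming the null result was 0 successes out of 14 people (n = 14 is my inference from the Beta(0.01, 0.32 + 14) posterior mentioned further down, not something stated here): each number is just (1 - p)^14.

    # Probability of observing zero successes in n trials under each
    # hypothesized per-person effect p. n = 14 is an assumption.
    n = 14
    for p in [0.10, 0.01, 0.05, 0.20]:
        print(p, round((1 - p) ** n, 3))  # ~0.229, ~0.869, ~0.488, ~0.044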

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or meaningfully close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probability should be:

75.3% chance it has no effect

23.4% chance it has a 1% effect

1.24% chance it has a 10% effect.

If, on the other hand, you originally believed that there's a 50% chance of it having no effect, and a 50% chance of it having a 10% effect, then your posterior should be:

81.3% chance it has no effect

18.7% chance it has a 10% effect.

Finally, if your prior is that it already has only a very small effect, this study is far too underpowered to support basically any conclusions at all. For example, if you originally believed that there's a 70% chance of it having no effect and a 30% chance of it having a 0.1% effect, then your posterior should be:

70.3% chance of no effect

29.7% chance of a 0.1% effect.

This is all assuming ideal conditions. Model uncertainty and uncertainty about the quality of my experiment should only decrease the size of your update, not increase it.
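For concreteness, here is a minimal sketch of the Bayes-rule arithmetic above, assuming the quoted percentages are the probabilities of observing the null result under each hypothesized effect size, and that a true zero effect produces the null result with probability ~1:

    def posterior(priors, likelihoods):
        # Multiply each prior by the probability of the observed (null) data
        # under that hypothesis, then normalize.
        unnormalized = [p * l for p, l in zip(priors, likelihoods)]
        total = sum(unnormalized)
        return [u / total for u in unnormalized]

    # Hypotheses: no effect, 1% effect, 10% effect
    print(posterior([0.70, 0.25, 0.05], [1.0, 0.87, 0.23]))  # ~[0.753, 0.234, 0.012]

    # Hypotheses: no effect, 10% effect
    print(posterior([0.50, 0.50], [1.0, 0.23]))              # ~[0.813, 0.187]

    # Hypotheses: no effect, 0.1% effect; 0.986 is (1 - 0.001)**14 under the
    # assumed n = 14
    print(posterior([0.70, 0.30], [1.0, 0.986]))             # ~[0.703, 0.297]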

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Comment author: JesseClifton 09 February 2017 08:11:47PM *  3 points [-]

More quick Bayes: Suppose we have a Beta(0.01, 0.32) prior on the proportion of people who will pledge. I choose this prior because it gives a point estimate of a ~3% chance of pledging and a probability of ~95% that the chance of pledging is less than 10%, which seems prima facie reasonable.

Updating on your data using a binomial model yields a Beta(0.01, 0.32 + 14) distribution, which gives a point estimate of < 0.1% and a ~99.9% probability that the true chance of pledging is less than 10%.
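In case it's useful, a rough sketch of this conjugate Beta-binomial update with scipy (assuming, as above, 0 pledges out of 14 people):

    from scipy.stats import beta

    prior = beta(0.01, 0.32)
    print(prior.mean())      # ~0.03: point estimate of ~3% chance of pledging
    print(prior.cdf(0.10))   # ~0.95: probability the chance of pledging is < 10%

    # Zero successes in 14 trials: alpha is unchanged, 14 failures add to beta.
    post = beta(0.01, 0.32 + 14)
    print(post.mean())       # < 0.001: point estimate below 0.1%
    print(post.cdf(0.10))    # ~0.999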

Comment author: JesseClifton 02 December 2016 10:09:07PM 2 points [-]

Thanks for writing this up.

The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter (http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/) and social desirability bias.

For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like "This study shows that $1 donations to newspaper ads save 3.1 chickens on average".

I continue to question whether these studies are worthwhile. Even if this one had found no significant differences between the treatments and the control, it's not as if we would stop spreading pro-animal messages. And it was not powered to detect the treatment differences in which you are interested. So it seems it was unlikely to be action-guiding from the start. And of course there's no way to know how much of the effect is explained by social desirability bias.

Comment author: Benito 12 November 2016 12:41:33PM *  1 point [-]

Also, "having our shit together in the long run" surely includes anti-speciesism (or at least much higher moral consideration for animals). Since EAs are some of the only people strategically working to spread anti-speciesism, it seems that this remains highly valuable on the margin.

I'd like to see an analysis of exactly what the opportunity costs there are before endorsing one. This one offers no differential analysis, and as such it reads: "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

Comment author: JesseClifton 12 November 2016 03:00:11PM 1 point [-]

...as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

I never meant to say that spreading anti-speciesism is the most important thing, just that it's still very important and it's not obvious that its relative value has changed with the election.

Comment author: Qiaochu_Yuan 11 November 2016 09:38:43PM *  8 points [-]

The case is for defending the conditions under which it's even possible to have a group of privileged people sitting around worrying about animal advocacy while the world is burning. To the extent that you think 1) Trump is a threat to democratic norms (as described e.g. by Julia Galef) / risks nuclear war etc. and isn't just a herald of more conservative policy, and 2) most liberals galvanized by the threat of Trump are worrying more about the latter than the former, there's room for EAs to be galvanized by the threat of Trump in a more bipartisan way, as described e.g. by Paul Christiano.

(In general, my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run, and that I find it very difficult to justify working directly to save animals now relative to working to help humans get their shit more together.)

Comment author: JesseClifton 11 November 2016 11:08:00PM *  1 point [-]

Trump may represent an increased threat to democratic norms and x-risk, but that doesn't mean the marginal value of working in those areas has changed. Perhaps it has. We'd need to see concrete examples of how EAs who previously had a comparative advantage in helping animals now can do better by working on these other things.

my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run

This may be true of massive systemic changes for animals like the abolition of factory farming or large-scale humanitarian intervention in nature. But the past few years have shown that we can reduce a lot of suffering through corporate reform. Animal product alternatives are also very promising.

Also, "having our shit together in the long run" surely includes anti-speciesism (or at least much higher moral consideration for animals). Since EAs are some of the only people strategically working to spread anti-speciesism, it seems that this remains highly valuable on the margin.

Edited to add: It's possible that helping animals has become more valuable on the margin, as many people (EA and otherwise) may think similarly to you and divert resources to politics. Many animal advocates still think humans come first. Just a speculation.
