The Ethics of Giving part one: Thomas Hill on the Kantian perspective on giving

A review and critique of the first section of the volume The Ethics of Giving: Philosophers' Perspectives on Philanthropy, edited by Paul Woodruff, with an emphasis on issues relevant to the decision-making of Effective Altruists. Hill is a distinguished Kant scholar who looks at what Kantian theory has...
Comment author: kbog 20 July 2018 04:25:24PM 2 points

If we upvote someone's comments, we are trusting them as a better authority, so we should give their votes greater weight in vote totals. It seems straightforward, then, that a weighted vote count is a better estimate of a comment's quality.

The downside is that it can create a feedback loop for a group of people with particular views. Having normal votes go from 1x to 3x over the course of several thousand karma seems like too small a change to make this happen. But the scaling of strong votes all the way up to 16x seems very excessive and risky to me.
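The kind of weighting under discussion can be sketched as a toy calculation. The thresholds below are invented for illustration; the forum's actual karma-to-weight table isn't specified in this thread:

```python
def vote_weight(karma: int, strong: bool = False) -> int:
    """Illustrative vote-weight schedule. Normal votes scale from 1x to 3x
    over thousands of karma; strong votes scale much higher, capped at 16x.
    The thresholds here are invented, not the forum's actual table."""
    normal = 1 if karma < 1000 else 2 if karma < 10000 else 3
    if not strong:
        return normal
    return min(16, 4 * normal + karma // 25000)


def weighted_score(votes):
    """votes: iterable of (karma, strong, direction) with direction +1 or -1."""
    return sum(direction * vote_weight(karma, strong)
               for karma, strong, direction in votes)


# One ordinary new user upvoting vs. one high-karma user strong-downvoting:
print(weighted_score([(0, False, +1), (20000, True, -1)]))  # -11
```

Under any schedule like this, a handful of high-karma users strong-voting together can swing a total by more than dozens of ordinary votes, which is the feedback-loop worry.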

Another downside is that it may encourage people to post things here that are better placed elsewhere, or left unsaid. After we've used this system for a while, I think we should take a step back and see whether there is too much crud on the forums.

Comment author: John_Maxwell_IV 20 July 2018 03:16:31AM 2 points

Great point. I think it's really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It's a relatively good controlled experiment because both communities are attracted by Scott's writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because "it takes forever to find the good content". And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands--especially in the open threads. The discussion is a lot more leisurely and people don't seem nearly as motivated to grab the reader's interest. The subreddit is a lot more political, maybe because reddit's voting system facilitates mobbing.

Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it's a bad idea to use the EA forum as a skunk works?

BTW there is more discussion of the subforums thing here.

Comment author: kbog 20 July 2018 04:17:04PM 0 points

My impression is that the subreddit comments can be longer, more detailed and higher quality than the blog comments. Maybe they are not better on average, but the outliers are far better and more numerous, and the karma sorting means the outliers are the ones that you see first.

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM 11 points

Feature request: integrate the content from the EA fora into LessWrong in a similar way as alignmentforum.org

Risks & dangers: I think there is a non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone on the development team or among the moderators really understands how such things influence people on the S1 (System 1) level. It seems somewhat similar to likes on Facebook, and it's clear that Facebook likes can mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (the ordering of content vs. subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked.

Comment author: kbog 20 July 2018 04:13:03PM 2 points

This forum, with its conventional counting of votes, is currently correlated with the EA subreddit; if we went with a like system, it would be correlated with Facebook. I'm not sure what else you could do, aside from having no likes or votes at all, which would clearly be bad because it makes it very hard to find the best content.

Comment author: Naryan 11 July 2018 06:54:09PM 0 points

I agree that markets are inefficient, but believe that the inefficiency results in opportunities that are both worse than average and better than average. Since I suspect most investors under-value the social impact, this would result in impact investments that are more attractive than average to someone who does value the impact as well as the return.

Generally, when I was looking to invest, I looked for options that I expected to outperform the market average at a set risk level, and I didn't assess social utility in that calculation (assuming I could donate the return more effectively, as you suggest). I'm not sure if this logically follows, but if my choice is between an effective charity and an impact investment, generally the effective charity would do more good. But if I'm considering my retirement fund, I believe the right impact investment could be better than a comparable equity investment; I just need to remember to include the social utility in my valuation.

Comment author: kbog 12 July 2018 03:04:22AM 0 points

Unless you assign relatively high priority to the cause the company addresses, I think it's appropriate to suppose that other impact investors are over-valuing the social impact. Also, since other impact investors don't think about counterfactuals, they are likely to greatly overestimate the social impact. They may think that when they invest $1000 in a company, they are actually making that company $1000 richer on balance, when in reality it is only $100 or $10 or $1 richer in the long run, due to market efficiency. I don't think markets are generally inefficient, just a bit, and only sometimes; it really depends on how you define it.
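To make the counterfactual arithmetic explicit: if a fraction of your investment is displaced by other investors adjusting their positions, only the remainder is real impact. The displacement fractions below are assumptions chosen to reproduce the $100/$10/$1 figures, not estimates:

```python
def counterfactual_funding(investment: float, displacement: float) -> float:
    """Long-run extra capital the company actually gains, if a `displacement`
    fraction of your investment is offset by other investors backing out.
    displacement = 1.0 corresponds to a perfectly efficient market."""
    return investment * (1.0 - displacement)


# $1000 invested under assumed (illustrative) displacement fractions:
for displacement in (0.9, 0.99, 0.999):
    extra = counterfactual_funding(1000, displacement)
    print(f"{displacement:.1%} displaced -> ${extra:.0f} of real funding")
```

At full displacement the naive $1000 of perceived impact shrinks to $0, which is the efficient-market limit described above.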

Comment author: kbog 11 July 2018 12:32:16PM 4 points

If capital markets are efficient and most people aren't impact investors, then there is no benefit to impact investing, as the coal company can get capital from someone else for the market rate as soon as you back out, and the solar company will lose most of its investors unless it offers a competitive rate of return. At the same time, there is no cost to impact investing.

In reality I think things are not always like this, but not only does inefficiency imply that impact investing has an impact, it also implies that you will get a lower financial return.

For most of us, our cause priorities are not directly addressed by publicly traded companies, so I think impact investing falls below the utility/returns frontier set by donations and investments. You can pick a combination of greedy investments and straight donations that is Pareto superior to an impact investment. If renewable energy for instance is one of your top cause priorities, then perhaps it is a different story.
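The Pareto-superiority claim can be sketched numerically. The returns and social-value figures below are hypothetical parameters, not estimates: we donate just enough principal that the remaining market-rate investment still matches the impact investment's financial return, then compare social value.

```python
def mix_dominates(market_return: float, impact_return: float,
                  impact_social_value: float, charity_value_per_dollar: float,
                  principal: float = 1000.0) -> bool:
    """Does a mix of a greedy (market-rate) investment plus a direct donation
    Pareto-dominate the impact investment? All inputs are hypothetical."""
    target = principal * (1 + impact_return)   # financial return to match
    invested = target / (1 + market_return)    # market-rate principal needed
    donation = principal - invested            # freed-up cash to donate
    return donation * charity_value_per_dollar >= impact_social_value


# 7% market vs. 4% impact investment on $1000: matching the financial return
# frees up about $28 to donate. At 10 units of social value per donated
# dollar, the mix beats an impact investment producing 50 units of value:
print(mix_dominates(0.07, 0.04, impact_social_value=50,
                    charity_value_per_dollar=10))  # True
```

If the impact investment's social value were high enough (or the charity weak enough), the mix would no longer dominate, which is why a top cause priority being addressed by the company can change the story.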

Comment author: turchin 08 July 2018 02:09:55PM 0 points

What if an AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle just by aggregating humans' opinions about possible situations?

Comment author: kbog 11 July 2018 12:09:16PM 1 point

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts?

In that case it would be exploring traditional metaethics, not moral uncertainty.

But if moral uncertainty is used as a solution then we just bake in some high level criteria for the appropriateness of a moral theory, and the credences will necessarily sum to 1. This is little different from baking in coherent extrapolated volition. In either case the agent is directly motivated to do whatever it is that satisfies our designated criteria, and it will still want to do it regardless of what it thinks about moral realism.

Those criteria might be very vague and philosophical, or they might be very specific and physical (like 'would a simulation of Bertrand Russell say "a-ha, that's a good theory"?'), but either way they will be specified.
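A minimal sketch of what "credences that necessarily sum to 1" over baked-in theories could look like, in the style of maximizing expected choiceworthiness; the theories and numbers are invented purely for illustration:

```python
def expected_choiceworthiness(action, credences, theories):
    """Credence-weighted score of an action across moral theories.
    Credences over the admitted theories must sum to 1, mirroring the
    point that once appropriateness criteria are baked in, the agent's
    credence is distributed over whatever satisfies them."""
    assert abs(sum(credences.values()) - 1.0) < 1e-9
    return sum(credences[name] * theories[name](action) for name in credences)


# Invented toy theories and credences, purely for illustration:
theories = {
    "utilitarian": lambda a: a["welfare"],
    "deontological": lambda a: 0.0 if a["violates_duty"] else 1.0,
}
credences = {"utilitarian": 0.6, "deontological": 0.4}
act = {"welfare": 0.8, "violates_duty": False}
score = expected_choiceworthiness(act, credences, theories)  # 0.6*0.8 + 0.4*1.0
```

Notice that the question of moral realism never enters the calculation: the agent maximizes this quantity because the criteria were baked in, not because it has settled any metaethical question.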

Comment author: kbog 04 July 2018 12:09:54AM 2 points

I disagree with 5. Under subjective probability theory it is not really coherent to think that one's expectation is inaccurate. You probably mean to say that they are difficult to predict precisely, but that's generally not relevant if we are maximizing expected value.

Comment author: kbog 03 July 2018 03:38:41AM 1 point

There are so many incredibly fun video games these days.

In response to comment by kbog on 1. What Is Moral Realism?
Comment author: Lukas_Gloor 31 May 2018 08:53:53AM 1 point

Do you think your argument also works against Railton's moral naturalism, or does my One Compelling Axiology (OCA) proposal introduce something that breaks the idea? The way I meant it, OCA is just a more extreme version of Railton's view.

I think I can see what you're pointing to though. I wrote:

Note that this proposal makes no claims about the linguistic level: I'm not saying that ordinary moral discourse lets us define morality as convergence in people's moral views after philosophical reflection under ideal conditions. (This would be a circular definition.) Instead, I am focusing on the aspect that such convergence would be practically relevant: [...]

So yes, this would be a bad proposal for what moral discourse is about. But it's meant like this: Railton claims that morality is about doing things that are "good for others from an impartial perspective." I like this and wanted to work with it, so I adopt this assumption, and I further add that I only want to call a view moral realism if "doing what is good for others from an impartial perspective" is well-specified. Then I give some account of what it would mean for it to be well-specified.

In my proposal, moral facts are not defined as that which people arrive at after reflection. Moral facts are still defined as the same thing Railton means. I'm just adding that maybe there are no moral facts in the way Railton means if we introduce the additional requirement that (strong) underdetermination is not allowed.

Comment author: kbog 13 June 2018 03:14:07AM 0 points

Yes, I think it applies to pretty much any other kind of naturalism as well. At least, any that I have seen.
