Comment author: zdgroff 03 September 2018 07:25:09AM 1 point [-]

I talked to two people who said things that indicated they lean EA, asked them whether they identified that way, and they told me they didn't because they associate EA with Singer-style act utilitarianism and self-imposed poverty through maximizing donated income.

This is interesting. What about them seemed EA-aligned? When I came across EA I was attracted to it because of the Singer-style act utilitarianism, and I've had worries that it's drifting too far from that and losing touch with the moral urgency that I felt in the early days. That said, I do think that actually trying to practice act utilitarianism leads to more mature views that suggest being careful about pushing ourselves too far.

Comment author: gworley3  (EA Profile) 03 September 2018 09:52:46PM 0 points [-]

Probably that they expressed interest in doing the most good possible for the world with their work.

Comment author: gworley3  (EA Profile) 30 August 2018 05:20:45PM 0 points [-]

Additional reflections from Marek, CEO of GoodAI, along with links to further media coverage, including a piece on whether or not to publish dangerous AI research.

HLAI 2018 Field Report (10 points)

Cross-posted from LW. I spent the past week in Prague (natively called Praha, but widely known as Prague via the usual mechanism by which others decide your names for you) at the Human Level AI multi-conference, held between the AGI, BICA, and NeSY conferences. It also featured the Future of AI track, which... Read More
Comment author: gworley3  (EA Profile) 17 August 2018 09:50:53PM 0 points [-]

What am I missing about virtue ethics/deontology that implies I shouldn't categorize them both into "means"?

I find this grouping odd, since I think of virtue ethics (which, full disclosure, is most like the way I intuitively approach applied ethics) as the only one that's really about means rather than ends: it's about what generates your observable behaviors rather than about the observable behaviors themselves.

To see what I mean, a deontological ethical system might say "don't kill other humans," and so you don't kill other humans because of that rule. In a certain sense this is a means to an end similar to what a consequentialist ethics might select for (humans end up not being killed, though here because you followed the rule), but it is also an end in itself: you do no killing. The only difference between deontological and consequentialist systems in this case is that a consequentialist system places the emphasis on (or, if you like, optimizes for) the global outcome, while the deontological system focuses more directly on one's observable behavior. It's a question of where you put the pressure: either on the observable, desired outcomes in the world, which imply the actions that generate them (consequentialism), or on the observable behaviors that generate those outcomes (deontology).

Virtue ethics is, to me, the odd one out here because it focuses on something that is not observable, or at least not observable by anyone other than the subject experiencing it. You can observe the outcomes produced by one who has virtue, which is intended to generate behaviors that cause desirable outcomes in the world, but virtue ethics pushes the focus back a level: you optimize for, or emphasize, the spontaneous generation of normative behavior rather than rules that generate normative behavior or normative behavior that produces a particular outcome.

Is it actually true that philosophers (generally) give the 3-category version over the 2-category version?

Philosophers use lots of different categorization methods for different reasons. The thing to keep in mind is that any grouping is always serving some purpose, so talking in terms of virtue, deontology, and consequences is simply one useful way of approaching ethical systems that helps us understand them. Other categorizations are possible and frequently used, and there is frequent disagreement among philosophers over where to draw particular lines or whether this or that ethical notion really belongs in some particular category.

Personally I find it helpful to think of these categorizations found throughout philosophy as useful for certain purposes in academic philosophy, but not necessarily useful to the objectives of philosophy itself, except inasmuch as you are addressing your readers and helping them understand an idea in the context of other ideas they are familiar with. It's tempting to get caught up in constructing detailed models of philosophical ideas at the expense of the philosophical act itself.

Whatever the answers to #1 and #2, what do you find to be the most helpful categorical breakdown of normative ethics?

All that being said, I tend to think in terms of descriptive vs. prescriptive approaches. As Hume argued, we can't directly derive the normative from what is normal without metaphysical speculation, and for myself I find no reason to believe it is necessary to speculate in a way that ends up implying any particular norms. I therefore tend to categorize ethics according to whether we are trying to discuss norms as we find them (descriptive) or as we wish we found them (prescriptive), my own inclinations running towards the descriptive.

Comment author: gworley3  (EA Profile) 14 August 2018 07:14:39PM 0 points [-]

I remain unsure, with MSR, how to calculate the measure of agents in other worlds holding positions to trade with, so that we can figure out how much we should acausally trade with each. I'm also unsure how to address uncertainty about whether anyone will independently arrive at the same position you hold, and so be able to acausally trade with you, since you can't tell them what you would actually prefer.

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM *  14 points [-]

Feature request: integrate the content from the EA fora into LessWrong in a similar way to alignmentforum.org.

Risks & dangers: I think there is a non-negligible chance that the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level. It seems somewhat similar to likes on Facebook, and it's clear likes on Facebook are able to mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (a focus on the ordering of content vs. subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked.

Comment author: gworley3  (EA Profile) 19 July 2018 09:32:02PM 1 point [-]

I agree that it would be nice if the EA Forum were implemented similarly to the Alignment Forum, although since that is itself still in beta, maybe the timeline doesn't permit it right away. Maybe it's something that could happen later, though?

As to the risks with voting and the comparison to likes on Facebook, I guess the question would be: is it any worse than any other system for voting on or liking content? If it's distorting discussions, it seems unlikely that the change will be any worse than the existing voting system on this forum, since they are structurally similar even if the weighted voting mechanism is new.
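To make the comparison concrete, here is a minimal sketch (in TypeScript, and purely illustrative: the interface, thresholds, and weights below are my own assumptions, not the forum's actual parameters) of how a karma-weighted voting scheme differs from a flat one. Each vote still just adds to a score; the only new part is that the increment depends on the voter's karma.

```typescript
// Illustrative sketch only: the thresholds and weights here are invented
// for explanation and are not the real LessWrong/EA Forum parameters.

interface Vote {
  voterKarma: number; // total karma of the user casting the vote
  direction: 1 | -1;  // upvote or downvote
  strong: boolean;    // whether the voter used a "strong" vote
}

// A hypothetical mapping from voter karma to vote weight.
function voteWeight(vote: Vote): number {
  const base = vote.strong
    ? Math.max(2, Math.floor(Math.log10(vote.voterKarma + 10)) + 1)
    : 1;
  return vote.direction * base;
}

// A post's displayed score is then just the sum of weighted votes.
function postScore(votes: Vote[]): number {
  return votes.reduce((total, v) => total + voteWeight(v), 0);
}

// Example: a new user's weak upvote counts for 1, while a high-karma
// user's strong upvote counts for several points.
const score = postScore([
  { voterKarma: 5, direction: 1, strong: false },
  { voterKarma: 12000, direction: 1, strong: true },
  { voterKarma: 300, direction: -1, strong: false },
]);
console.log(score); // 1 + 5 - 1 = 5 under these assumed weights
```

Structurally this is the same accumulate-a-score design as the old system; only the weighting function changes, which is why I'd expect distortion effects of a similar kind rather than something categorically worse.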

Comment author: cassidynelson 15 March 2018 03:44:06AM 2 points [-]

This article has many parallels with Greg Lewis' recent article on the unilateralist's curse, as it pertains to biotechnology development.

If participation in an SRO is voluntary, even if you have 9 out of 10 organisations on board, how do you stop the final one from proceeding with AGI development without oversight? I'd imagine that the setup of an SRO may confer disadvantages on participants, thus indirectly incentivizing non-participation (if the lack of restrictions increases the probability of reaching AGI first).

Do you anticipate that an SRO may be an initial step towards a mandatory framework for oversight?

Comment author: gworley3  (EA Profile) 15 March 2018 06:55:54PM 0 points [-]

An SRO might incentivize participation in several ways. One is idea sharing, be it via patent agreements or sharing of trade secrets among members on a secure forum. Another is via social and possibly legal penalties for non-participation, or by acting as a cartel to lock non-participants out of the market the way many professional groups do.

That said, it does seem like a step in the direction of legal oversight, but it moves us towards a model similar to so-called technocratic regulatory bodies rather than one where legislation tries to control actions directly. Creating an SRO would give us an already-existing organization that could step in to serve this role in an official capacity if governments or inter-governmental organizations choose to regulate AI.

Comment author: gworley3  (EA Profile) 13 March 2018 07:19:39PM 3 points [-]

I think you are conflating EA with utilitarianism/consequentialism. To be fair, this is totally understandable, since many EAs are consequentialists and consequentialist EAs may not be careful to make, or even see, such a distinction. But as someone who is closest to being a virtue ethicist (although my actual metaethics are way more complicated), I see EA as being mainly about intentionally focusing on effectiveness in our altruistic endeavors, rather than just doing what feels good.

Avoiding AI Races Through Self-Regulation (4 points)

The first group to build artificial general intelligence (AGI) stands to gain a significant strategic and market advantage over competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. An AGI race would be dangerous, though, because it would prioritize capabilities over... Read More
